Dataset schema:
- paper_id: string (length 9–13)
- venue: string (171 distinct values)
- year: string (7 distinct values)
- paper_title: string (length 0–188)
- paper_authors: string (length 4–1.01k)
- paper_abstract: string (length 0–5k)
- paper_keywords: string (length 2–679)
- paper_content: string (length 0–100k)
- review_id: string (length 9–12)
- review_title: string (length 0–500)
- review_rating: string (92 distinct values)
- review_text: string (length 0–28.3k)
- review_confidence: string (21 distinct values)
CMivR3x5fpC
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment
["Zeyu Zhao", "Nan Gao", "Zhi Zeng", "Guixuan Zhang", "Jie Liu", "Shuwu Zhang"]
This paper presents the Anonymous entry to the Generation and Evaluation of Non-verbal Behaviour for Embodied Agents (GENEA) Challenge 2023. The system is originally designed for few-shot scenarios such as generating gestures with the style of any in-the-wild target speaker from short speech samples. Given a group of reference speech data including gesture sequences, audio, and text, it first constructs a gesture motion graph that describes the soft gesture units and interframe continuity inside the speech, which is ready to be used for new rhythmic and semantic gesture reenactment by pathfinding when test audio and text are provided. We randomly choose one clip from the training data for one test clip to simulate a few-shot scenario and provide compatible results for subjective evaluations. Despite the 0.25% average utilization of the whole training set for each clip in the test set and the 17.5% total utilization of the training set for the whole test set, the system succeeds in providing valid results and ranks in the top 1/3 in the appropriateness for agent speech evaluation.
["speech-driven gesture generation", "motion graph", "few-shot"]
ABSTRACT
This paper presents the CASIA-GO entry to the Generation and Evaluation of Non-verbal Behaviour for Embodied Agents (GENEA) Challenge 2023. The system is originally designed for few-shot scenarios such as generating gestures with the style of any in-the-wild target speaker from short speech samples. Given a group of reference speech data including gesture sequences, audio, and text, it first constructs a gesture motion graph that describes the soft gesture units and interframe continuity inside the speech, which is ready to be used for new rhythmic and semantic gesture reenactment by pathfinding when test audio and text are provided. We randomly choose one clip from the training data for one test clip to simulate a few-shot scenario and provide compatible results for subjective evaluations.

ICMI '23, October 9–13, 2023, Paris, France. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0055-2/23/10. https://doi.org/10.1145/3577190.3616118
Despite the 0.25% average utilization of the whole training set for each clip in the test set and the 17.5% total utilization of the training set for the whole test set, the system succeeds in providing valid results and ranks in the top 1/3 in the appropriateness for agent speech evaluation.

CCS CONCEPTS
• Human-centered computing → Human computer interaction (HCI); • Computing methodologies → Animation.

KEYWORDS
speech-driven gesture generation, motion graph, few-shot

ACM Reference Format:
Zeyu Zhao, Nan Gao, Zhi Zeng, Guixuan Zhang, Jie Liu, and Shuwu Zhang. 2023. Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3577190.3616118

1 INTRODUCTION
Generating co-speech gestures that convey rich non-verbal information remains challenging due to the indeterministic nature of the task. The one-to-many mapping between the modalities, along with other difficulties such as the lack of high-quality large-scale datasets and standardized evaluation protocols, makes it difficult to design and evaluate models for speech-driven gesture generation. In recent years, data-driven methods have attracted the interest of many researchers in the field. However, most of these methods require training on large-scale datasets. How to produce gestures in common scenarios where training data are insufficient, such as reenacting gestures with new styles naturally encoded in very few recorded gesture samples of an in-the-wild target human performer, is rarely discussed.

In this paper, we try to address this problem by designing a system that can explicitly locate key positions of rhythmic and semantic events in the sequences to form basic units of gestures and describe the continuity relationships inside.
Part of that comes from the commonly agreed observation [1, 23] that while most co-speech gestures are synchronized with the rhythm of the voice, some gestures are more relevant to the actual meaning of the words or sentences. The other part is that it should be able to produce new gesture units that break the natural continuity relationships between units for good diversity performance. Inspired by [23], we find that motion graphs and related search algorithms are well suited to this task. With the gesture sequence, audio, and text of a reference speech and the audio and text of any test speech, the main idea is to construct a motion graph that describes the soft gesture units and continuity relationships inside the reference speech and to search the graph for new paths of gesture frames given the test speech, as shown in Figure 1. Numerous modifications and improvements, such as new pruning strategies, feature-based initialization, and fallback measures, can be made to the framework to enable compatibility with pure gesture data instead of video frames. These prove to be the key factors for the feasibility, performance, and robustness of the system.

To gain a better sense of how good the results produced by the system can be, we participate in this year's GENEA Challenge to evaluate our results reenacted from few-shot data and compare them with results from other systems that utilize large-scale data. To do this, we simulate a few-shot scenario by randomly choosing one clip in the provided training set as the whole reference speech for each clip in the test set, regardless of any speaker identity. For each test speech, the system only utilizes 0.25% of the whole training set on average. In total, the system utilizes 17.5% of the whole training set for the whole test set.
Despite the low utilization of the training data, the system succeeds in producing high-quality gestures for the test set and achieves good performance in the challenge.

2 RELATED WORKS
Large-scale data-driven methods have become exceedingly popular in recent years for speech-driven data generation tasks [15], taking over from rule-based methods [14] and probabilistic modeling methods [10]. Basic deep learning models show great capabilities in encoding input data and generating new gestures [3, 20]. New architectural designs that fit the specific properties of the task, such as skeleton hierarchies or gesture categories, have been proposed to improve the performance of gesture generation [1, 13]. New generative models can also be utilized as backbones of the generation networks [19, 24].

The mixed usage of matching-based and learning-based methods can also be seen in numerous works to bypass limitations of deep learning models [4, 18]. Motion graphs were proposed to generate controllable animation from pre-recorded motion [5] and are commonly used in gesture-related tasks such as retrieval and creation [6, 16]. For speech-driven data generation, they can be utilized by defining each graph node as the feature of a sequence of gestures [22], or by defining each node as a video frame [23]. Inspired by these works, we find motion graphs suitable for our task for their inter-frame relationship description capabilities, regardless of the presence of learning-based modules. Thus, we design motion graphs for reenacting gestures from few-shot reference gesture sequences instead of large-scale data or video frames.

3 DATA PROCESSING
The dataset provided by the challenge organizers this year [7] is derived from the Talking With Hands data [9]. Gesture sequences, audio, text, and speaker labels of both the main agent and the interlocutor are included in the dataset, making it a dyadic dataset compared to the monadic dataset last year.
As mentioned above, our system does not utilize all the training data provided. Instead, we use the training set to simulate a few-shot scenario where only a small amount of data is available as reference speech. For the test set, only the audio and text data of the main agent in the test clips are utilized by the system. For each clip, only one clip in the training set is randomly chosen as the reference speech, of which only the gesture, audio, and text data of the main agent are utilized by the system. Other data, including anything relevant to the interlocutor, the speaker labels, and the validation set, are ignored by the system.

The data are preprocessed using the utilities provided by [2], including converting between Euler angle and exponential map rotation representations, selecting the 25 joints on the upper and lower body excluding the fingers, and aligning the text to gesture frames. Since the system can work with gestures under any skeleton definition, the skeletons used inside the system are in both exponential map rotation representation and position representation. The words in the text are pre-converted to integer indices. Due to the poor quality of the hand tracking and some significant flickering on the body, we have to add 19 clips in the training set to the random selection blacklist, lock the yaw and pitch rotation of the 4 wrist-related joints, and apply a Savitzky-Golay filter with a window length of 15 and polynomial order of 3 to the roll rotation of the 4 wrist-related joints.

4 METHOD
The gesture motion graph is a graph structure that represents the continuity relationships between frames in a gesture sequence regardless of the length or the skeleton definition of the sequence, as shown in Figure 2.
Following [23], each node in the graph represents a frame in the gesture sequence, and each directed edge between two nodes indicates that the distance between the two frames is small enough for the transition to be considered continuous. Given a reference gesture sequence and its corresponding speech audio and text, we can construct its gesture motion graph by detecting key nodes that non-uniquely split the gesture sequence into subsequences of soft gesture units and by analyzing the continuity relationships between frames to find edges for unnaturally continuous frames. When we need to reenact a new test gesture sequence from its speech audio and text, we split the test sequence into subsequences using the positions of the same kinds of key frames detected in the test speech and use a pathfinding algorithm to find the optimal paths of nodes in the graph corresponding to every test subsequence. A new gesture sequence that is rhythmically matched to the input speech audio and semantically relevant to the input text can then be reenacted by concatenating and blending the gesture frames along the paths. Due to random operations in some fallback measures, the system may produce slightly different results at some parts for the same input.

Figure 2: A sample gesture motion graph with zoomed views of examples of a) a regular node, b) an onset node, c) a keyword node, d) a break node, e) a natural edge, and f) an unnatural edge.

4.1 Graph Construction
4.1.1 Key node detection. After adding all frames in the gesture sequence as regular nodes to the graph, we first perform onset detection on the reference speech audio to find onset nodes in the gesture motion graph. The onsets are located at the backtracked peaks of the audio's spectral flux viewed as the onset strength [12], aligned to the gesture frames. Filtering on the onset strength can control the number of output onsets, which further controls the length of the soft gesture units used for reenactment.
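The onset-node detection above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the FFT parameters, gesture frame rate, and peak-picking details are assumptions, and the paper's strength threshold of 5 applies to its own onset-strength scale, not necessarily this one.

```python
import numpy as np

def onset_strength(y, n_fft=1024, hop=512):
    """Onset strength as spectral flux: positive magnitude increases summed over bins."""
    window = np.hanning(n_fft)
    frames = np.array([y[i:i + n_fft] * window
                       for i in range(0, len(y) - n_fft, hop)])
    mags = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.maximum(0.0, np.diff(mags, axis=0)).sum(axis=1)
    return np.concatenate([[0.0], flux])  # one value per STFT frame

def detect_onset_frames(y, sr, gesture_fps=30.0, min_strength=5.0, hop=512):
    """Gesture-frame indices of onset nodes: backtracked spectral-flux peaks."""
    env = onset_strength(y, hop=hop)
    # Local maxima above the strength threshold; raising the threshold yields
    # fewer onsets and therefore longer soft gesture units.
    peaks = [t for t in range(1, len(env) - 1)
             if env[t] >= env[t - 1] and env[t] > env[t + 1]
             and env[t] >= min_strength]
    onsets = set()
    for t in peaks:  # backtrack each peak to the preceding envelope minimum
        while t > 0 and env[t - 1] < env[t]:
            t -= 1
        onsets.add(t)
    times = np.array(sorted(onsets)) * hop / sr            # STFT frames -> seconds
    return np.unique(np.round(times * gesture_fps).astype(int))  # -> gesture frames
```

A library such as librosa provides equivalent onset-strength and backtracking utilities; the point here is only the pipeline: spectral flux, strength filtering, backtracking, and alignment to the gesture frame rate.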
Then we perform keyword detection on the reference speech text to mark keyword nodes in the gesture motion graph. With the input text aligned to the frames, each word is checked to see if it belongs to a list of keywords (see [23]). If a subsequence of one or more repeating keywords is found in the text, the node corresponding to the first frame of this subsequence is marked as a keyword node with that keyword. Also, there might be interruptions inside the speech, e.g. when the speech is a composition of multiple discontinuous segments. Any frame that is not continuous with the next frame is marked as a break node.

4.1.2 Continuity analysis. We first directly add directed edges with zero weights to the graph for the frames that are naturally continuous. Then we traverse every pair of different non-continuous frames as "left" and "right" frames $p_l, p_r$ and calculate their distance. Here, the distance between two gesture frames, or poses, is defined as the weighted sum of the Euclidean distance of the joint positions and the Euclidean distance of the joint velocities:

$d_{\text{pose}}(p_l, p_r) = \lambda_{\text{pos}} \lVert p_l - p_r \rVert_2 + \lambda_{\text{vel}} \lVert v_l - v_r \rVert_2$,

where the velocities $v_l, v_r$ can be calculated by differencing the current and previous frames, and $\lambda_{\text{pos}}, \lambda_{\text{vel}}$ are the weights of the two terms. For every left frame, a dynamic threshold for continuity is defined as the mean distance between the left frame and its following (up to) $l_{cn}$ frames. This threshold is used to filter out the right frames whose distances are too large for them to be considered continuous. After filtering, every remaining right frame adds a candidate directed edge to a list (not to the motion graph) with its pose distance to the left frame as the weight. However, this criterion of continuity can produce a large number of neighboring right frames for a left frame and frequently generates short loops in the graph. Thus, we perform two pruning operations to reduce the number of candidate edges.
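A minimal sketch of the pose distance and the dynamic-threshold candidate-edge collection, assuming poses are given as joint-position arrays; the pruning operations and graph bookkeeping are omitted:

```python
import numpy as np

def pose_distance(pos_l, pos_r, vel_l, vel_r, lambda_pos=1.0, lambda_vel=1.0):
    """Weighted sum of position and velocity Euclidean distances between two poses."""
    return (lambda_pos * np.linalg.norm(pos_l - pos_r)
            + lambda_vel * np.linalg.norm(vel_l - vel_r))

def candidate_edges(positions, l_cn=5, lambda_pos=1.0, lambda_vel=1.0):
    """For each left frame, collect candidate edges to non-adjacent right frames
    whose pose distance falls under the left frame's dynamic threshold."""
    n = len(positions)
    # Velocity by differencing; the first frame gets zero velocity.
    velocities = np.diff(positions, axis=0, prepend=positions[:1])
    edges = []
    for l in range(n):
        # Dynamic threshold: mean distance to the (up to) l_cn following frames.
        following = range(l + 1, min(l + 1 + l_cn, n))
        if not following:
            continue
        thr = np.mean([pose_distance(positions[l], positions[j],
                                     velocities[l], velocities[j],
                                     lambda_pos, lambda_vel) for j in following])
        for r in range(n):
            if abs(r - l) <= 1:
                continue  # naturally continuous neighbours already have zero-weight edges
            d = pose_distance(positions[l], positions[r],
                              velocities[l], velocities[r], lambda_pos, lambda_vel)
            if d <= thr:
                edges.append((l, r, d))  # edge weight = pose distance
    return edges
```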
For each left frame, the first strategy is: for a continuous sequence of up to $l_{pn}$ right frames in the candidate list, we only keep the first one and remove the others. The second strategy is: among the remaining right frames, one is removed if another edge that starts in the $l_{pn}$-neighborhood of one frame and ends in the $l_{pn}$-neighborhood of the other already exists in the graph. After the pruning, we add all candidate edges to the graph and move on to the next left frame.

4.2 Pathfinding
4.2.1 Beam search. The core of the pathfinding algorithm is a parallelized greedy breadth-first search known as beam search [8], run for each test subsequence. Given the target path length $l_{\text{sub}}$, the termination criteria for paths, and $l_{\text{npaths}}$ initial starting nodes, the beam search algorithm outputs the $l_{\text{npaths}}$ paths with the minimum costs, which may have different lengths. These $l_{\text{npaths}}$ paths are initially one-node paths containing only the given starting nodes. As shown in Figure 3, at each iteration, we initialize an empty watch list and check whether the $l_{\text{npaths}}$ paths are already terminated. All terminated paths are directly added to the watch list, and all unterminated paths are expanded by appending the children of their last nodes. If the last node of a path has multiple children, it is split into multiple paths, each with one child appended, which are then all added to the watch list as well. Then, we calculate the costs of all watched paths and select those with the $l_{\text{npaths}}$ minimum costs, which become the new $l_{\text{npaths}}$ paths. Here, the cost of a path $P$ is defined as the sum of the weights of the edges along the path, penalized by the difference between the length $l_{\text{path}}$ of this path and the length $l_{\text{sub}}$ of the test subsequence:

$c_{\text{path}}(P) = \lambda_w \left( \sum_{i=1}^{l_{\text{path}}-1} w_{i,i+1} \right) + \lambda_{\text{len}} \left\lvert 1 - \frac{l_{\text{path}}}{l_{\text{sub}}} \right\rvert$,

where $w_{i,j}$ is the weight of the edge $(p_i, p_j)$, and $\lambda_w, \lambda_{\text{len}}$ are the weights of the two terms.
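The beam search and its path cost can be sketched as follows; `children` (adjacency dict) and `weights` (edge-weight dict) are hypothetical data structures, and the conditional-termination and fallback logic is reduced to a plain `is_terminal` predicate:

```python
def path_cost(path, weights, lam_w=1.0, lam_len=1.0, l_sub=30):
    """Edge-weight sum penalized by relative length mismatch with the target l_sub."""
    w_sum = sum(weights[(a, b)] for a, b in zip(path, path[1:]))
    return lam_w * w_sum + lam_len * abs(1.0 - len(path) / l_sub)

def beam_search(starts, children, weights, is_terminal, l_sub,
                n_paths=20, max_iters=100):
    """Keep the n_paths cheapest paths; expand unterminated ones each iteration."""
    paths = [[s] for s in starts]
    for _ in range(max_iters):
        watch = []
        for p in paths:
            if is_terminal(p[-1]):
                watch.append(p)            # terminated paths stay on the watch list
            else:
                for c in children.get(p[-1], []):
                    watch.append(p + [c])  # split the path once per child
        if not watch:
            break
        # Keep the n_paths cheapest watched paths as the new beam.
        paths = sorted(watch, key=lambda p: path_cost(p, weights, l_sub=l_sub))[:n_paths]
        if all(is_terminal(p[-1]) for p in paths):
            break
    accepted = [p for p in paths if is_terminal(p[-1])]
    return (min(accepted, key=lambda p: path_cost(p, weights, l_sub=l_sub))
            if accepted else None)
```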
The algorithm repeats these steps and stops when the maximum search length is reached or all $l_{\text{npaths}}$ paths are accepted (see appendix). Finally, the accepted path with the lowest cost is chosen for the current test subsequence.

Figure 3: An example of two iterations of the beam search process. Each iteration expands all children of the last nodes of the current paths. The expanded paths are then sorted and selected according to their costs. Terminated paths are in green.

4.2.2 Conditional termination. For each test subsequence, we set the termination criteria independently based on various considerations. Normally, if the test subsequence ends at a keyword frame, the paths should terminate at any keyword node in the graph with the exact same keyword to produce semantic gestures. Otherwise, the paths should terminate at any onset or break node in the graph to produce rhythmic gestures. If no accepted path is found after the beam search is forcibly stopped, we re-initialize the starting nodes and retry the search. Fallback measures (see appendix) can also be designed to guarantee that the beam search stops with at least one accepted path in most cases. If no retry is needed, the beam search for the next subsequence takes the successor nodes of the ending nodes as its initial starting nodes, which keeps the reenacted gestures as naturally continuous as possible.

4.2.3 Feature-based initialization. For starting node initialization, a method based on key node features is designed for the beam search to increase the chance of finding a lower-cost path. The feature of a key node $f$ is defined as the list of lengths of the $l_{\text{feat}}$ trailing natural subsequences split by any key node, ignoring the unnatural edges:

$f_i = \{\, f_i - f_{i-1},\ f_{i+1} - f_i,\ \ldots,\ f_{i+l_{\text{feat}}-1} - f_{i+l_{\text{feat}}-2} \,\}$,

where $f_j$ is the frame number of the key node with index $1 \le j \le l_k$ in the ordered list of all $l_k$ key nodes, $f_j = 0$ when $j = 0$, and $f_j = f_{l_k}$ when $j > l_k$.
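The key-node feature construction follows directly from the definition; the boundary conventions $f_0 = 0$ and $f_j = f_{l_k}$ for $j > l_k$ are taken from the text, and the input is assumed to be the ordered list of key-node frame numbers:

```python
def key_node_features(key_frames, l_feat=10):
    """Feature of each key node: lengths of the l_feat trailing natural
    subsequences delimited by successive key nodes (with boundary padding)."""
    l_k = len(key_frames)

    def frame(j):
        # 1-based key-node index with the paper's boundary conventions.
        if j <= 0:
            return 0
        if j > l_k:
            return key_frames[-1]
        return key_frames[j - 1]

    features = []
    for i in range(1, l_k + 1):
        # Element t of feature i is frame(i + t) - frame(i + t - 1).
        feat = [frame(i + t) - frame(i + t - 1) for t in range(l_feat)]
        features.append(feat)
    return features
```

Past the last key node the differences collapse to zero, so features of late key nodes are naturally padded with zeros.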
For a test subsequence, we calculate the feature distance between the starting key node $k_t$ and each key node $k_m$ in the graph:

$d_{\text{feat}}(k_t, k_m) = \lambda_{\text{full}} \lVert w_{\text{full}} \odot (f_t - f_m) \rVert_2 + \lambda_{\text{first}} \left\lvert 1 - \frac{f_{m,1}}{f_{t,1}} \right\rvert + \lambda_{\text{occ}}\, o_m$,

where $w_{\text{full}} \in [0,1]^{l_{\text{feat}}}$ defines the weight for each element of the feature, $f_{\cdot,1}$ denotes the first element of the feature, $\odot$ denotes element-wise multiplication, $o_m$ is the number of times the key node $k_m$ has already been accepted in paths for the whole test speech, and $\lambda_{\text{full}}, \lambda_{\text{first}}, \lambda_{\text{occ}}$ are the weights of the three terms. The $l_{\text{npaths}}$ key nodes with the minimum distances are selected as the initial starting nodes. Fallback measures (see appendix) guarantee that there are always $l_{\text{npaths}}$ starting nodes initialized for searching after retries.

4.2.4 Blending. After the beam search for every test subsequence, we obtain a list of paths of pose frames in the gesture motion graph. As shown in Figure 4, we design a blending mechanism to smooth the transitions between paths, as they are most likely to be discontinuous. For two paths that need to be concatenated, we call the last (up to) $l_{\text{blend}}$ frames of the first one the left path $P_l$ and the first (up to) $l_{\text{blend}}$ frames of the second one the right path $P_r$. We generate a path of new gestures for the concatenated left and right paths $P_c$:

$P_c = (1 - w_{\text{blend}}) \odot \big( P_l \oplus (\{P_{r,1}\} \times \min(l_r, l_{\text{blend}})) \big) + w_{\text{blend}} \odot \big( (\{P_{l,l_l}\} \times \min(l_l, l_{\text{blend}})) \oplus P_r \big)$,

where $w_{\text{blend}}$ is the weight vector, $\oplus$ denotes concatenation, $\times$ denotes repeating all elements in a vector, $P_{\cdot,i}$ is the $i$-th node in a path, and $l_l, l_r$ are the lengths of the left and right paths. The weight vector can be generated by linear, sigmoid, or other functions that map evenly spaced values to the range $(0, 1)$.
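For position-represented poses, the blending formula can be sketched as a crossfade between two held-endpoint extensions of the paths (linear weights assumed; the paper also allows sigmoid weights, or SLERP for rotations):

```python
import numpy as np

def blend_paths(left, right, w_fn=None):
    """Crossfade the last frames of `left` into the first frames of `right`.
    `left`/`right`: arrays of shape (frames, dof), already truncated to the
    blending window. The shorter side is padded by holding its end frame."""
    n = len(left) + len(right)
    # Evenly spaced blending weights in (0, 1); linear by default.
    w = (np.arange(1, n + 1) / (n + 1)) if w_fn is None else w_fn(n)
    # Branch A: the left path followed by the right path's first frame, held.
    a = np.concatenate([left, np.repeat(right[:1], len(right), axis=0)])
    # Branch B: the left path's last frame held, followed by the right path.
    b = np.concatenate([np.repeat(left[-1:], len(left), axis=0), right])
    return (1.0 - w)[:, None] * a + w[:, None] * b
```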
For skeletons defined as exponential map rotations of the joints, we can also convert them to quaternions and use spherical linear interpolation (SLERP) to blend the rotations instead of using a direct weighted sum.

5 EVALUATIONS
To evaluate the effectiveness of the system, we generate results using the data and method described above with the following configuration: $\lambda_{\text{pos}} = \lambda_{\text{vel}} = 1$, $l_{cn} = 5$, $l_{pn} = 10$, $l_{\text{npaths}} = 20$, $\lambda_w = \lambda_{\text{len}} = 1$, $l_{\text{feat}} = 10$, $\lambda_{\text{full}} = \lambda_{\text{first}} = 1$, $\lambda_{\text{occ}} = 0.5$, $w_{\text{full}} = \{1, 0.5, 0.5, 0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1\}$, and minimum onset strength threshold 5.

Figure 4: An example of the blending process. The green and black paths are blended to form a blue path (left), which is then blended with another black path to form a red path (right).

5.1 Subjective Evaluation
The generated results in Euler angle rotation representation (converted from exponential map) were submitted to the challenge organizers and evaluated by human evaluators recruited from six English-speaking countries [7]. Three aspects of the generated results are evaluated and released to the participants: human-likeness, appropriateness for agent speech, and appropriateness for the interlocutor. We do not discuss the last one since it assumes that the systems are interlocutor-aware, which is not the case for our system. No objective evaluation results are available to the participants. Videos used in this evaluation are available at https://zenodo.org/record/8211449.

5.1.1 Appropriateness for agent speech evaluation. As mentioned, to simulate a few-shot scenario, for each test clip (minimum 60 seconds, maximum 77 seconds, 62.4 seconds on average), only one training clip is randomly chosen as the reference speech. For the 70 given test clips, 70 different training clips are finally chosen.
Each chosen training clip (minimum 60 seconds, maximum 427 seconds, 170.2 seconds on average) only constitutes a tiny portion (minimum 0.088%, maximum 0.627%, 0.25% on average) of the whole training set (68069.9 seconds). For the whole test set, only 17.5% of the training data are utilized to produce the results. Despite the low utilization of the training set, the results generated by our system (labeled SK) received a good mean appropriateness score (MAS) of 0.18 ± 0.06, ranking fourth among the 12 participants (top 1/3). The full results can be found in Table 1 and Figure 5. This shows that the system is able to produce high-quality results that are comparable with systems utilizing large-scale datasets.

Figure 5: Bar plots visualising the response distribution in the appropriateness for agent speech study [7].

Table 1: Appropriateness for agent speech [7]

Condition  MAS          Pref. matched    2    1     0   −1   −2   Sum
NA         0.81±0.06    73.6%          755  452   185  217  157  1766
SG         0.39±0.07    61.8%          531  486   201  330  259  1807
SJ         0.27±0.06    58.4%          338  521   391  401  155  1806
BM         0.20±0.05    56.6%          269  559   390  451  139  1808
SF         0.20±0.06    55.8%          397  483   261  421  249  1811
SK         0.18±0.06    55.6%          370  491   283  406  252  1802
SI         0.16±0.06    55.5%          283  547   342  428  202  1802
SE         0.16±0.05    54.9%          221  525   489  453  117  1805
BD         0.14±0.06    54.8%          310  505   357  422  220  1814
SD         0.14±0.06    55.0%          252  561   350  459  175  1797
SB         0.13±0.06    55.0%          320  508   339  386  262  1815
SA         0.11±0.06    53.6%          238  495   438  444  162  1777
SH         0.09±0.07    52.9%          384  438   258  393  325  1798
SL         0.05±0.05    51.7%          200  522   432  491  170  1815
SC         −0.02±0.04   49.1%           72  284  1057  314   76  1803

5.1.2 Human-likeness. However, our system did not get a satisfying median score ($37 \in [35, 40]$) in the human-likeness evaluation, ranking ninth among the 12 participants.
Since our system reenacts new gestures from the raw gesture frames of the reference gesture sequence, the quality of the results is heavily affected by the quality and the length of the reference data. Flickering or other defects existing in the naturally continuous frames and the lower-than-needed training data utilization are possible causes of the low ratings given by the evaluators. Also, the blending process can only guarantee smooth transitions between paths. If too many transitions occur in a very short time span, it may give the evaluators a non-humanlike impression. In short, increasing the quality of the reference speech data and using more training data as reference speeches may yield a better score in this evaluation.

5.2 Ablation Study
Pruning strategies, feature-based initialization, fallback measures, and other new designs for the gesture motion graph are key factors for the feasibility, performance, and robustness of the system. To justify this, we also conduct ablation studies using the results in joint position representation. We evaluate our system in three setups on three objective metrics. The weak detection setup removes proper filtering measures in onset detection (with a minimum onset strength threshold of 0). The weak pruning setup degrades the pruning operations in continuity analysis ($l_{pn} = 1$). The weak initialization setup initializes random starting nodes in the beam search algorithm. The first metric is for motion synchronization (Syn) [17], which calculates the differences between the velocity magnitudes of the generated and ground truth gestures at each frame. Note that the results of such distance comparisons cannot accurately measure the quality of the generated gestures.
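One plausible reading of the Syn metric (the precise definition is in [17]; this sketch simply averages per-frame, per-joint differences of velocity magnitudes) is:

```python
import numpy as np

def motion_sync(generated, ground_truth):
    """Mean per-frame difference between velocity magnitudes of two
    position sequences of shape (frames, joints, 3). Lower is better."""
    v_gen = np.linalg.norm(np.diff(generated, axis=0), axis=-1)     # (frames-1, joints)
    v_gt = np.linalg.norm(np.diff(ground_truth, axis=0), axis=-1)
    return float(np.mean(np.abs(v_gen - v_gt)))
```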
The second metric is a beat consistency (BC) score [11] that measures the beat correlation between gestures and speech audio by calculating the mean distance between the audio onsets and the nearest peaks of the angle change rate. The third metric is for gesture diversity (Div) [21]. It calculates the ratio of large angular changes of velocities between frames and uses that to indicate the frequency of motion changes. Finally, another no termination fallback setup that disables all termination fallback measures is added, and the number of failures (stuck in infinite loops) during pathfinding is counted to demonstrate the necessity of these measures.

Table 2: Ablation study results

Setup                Syn↓      BC↑        Div↑     #Failure
Weak Detection       0.61393   0.021577   0.06101  0
Weak Pruning         0.57947   0.022278   0.05795  0
Weak Initialization  0.58290   0.021982   0.06639  0
No Term. Fallback    -         -          -        51
Full                 0.57866   0.022087   0.07461  0

We see in Table 2 that although the weak setups sometimes produce gestures with a better rhythmic score, they perform much worse in velocity similarity to the ground truth or in gesture diversity. Moreover, the system fails 51 times out of 70 (73%) without the fallback measures, showing that these designs are necessary for the graph to work with few-shot gesture data.

6 CONCLUSION
In this work, we propose a system based on gesture motion graphs for reenacting gestures in few-shot scenarios where very few reference samples are available. The input reference gesture and speech data are analyzed, and a gesture motion graph with descriptions of the interframe continuity and key rhythmic and semantic events is constructed. Given the test speech, a path of blended pose frames can be searched from the gesture motion graph to form a new sequence of reenacted gestures.
The evaluations show that the system can generate high-quality results comparable with methods designed for large-scale data, and that the new designs succeed in providing robust performance for the system.

Nevertheless, this system has limitations in multiple aspects. For example, although the requirement for data size is reduced, the reference data still need to be of high quality for reenactment. Also, the construction and search processes are manually designed based on human prior knowledge, with some thresholds that need to be tuned manually. We can explore learning-based methods that can enhance the mechanisms of key node detection, path cost, etc.

ACKNOWLEDGMENTS
This work was supported by the National Key R&D Program of China (2022YFF0901902).
-YDL9XdiEF
The paper proposes a few-shot method based on motion graphs. The idea is well motivated, with a clear exposition that allows replication. Overall, a solid paper that will be of interest to the workshop and challenge attendees.
8: Top 50% of accepted papers, clear accept
## Paper Summary
The paper proposes a co-speech gesture synthesis method based on motion matching. Specifically, it builds a gesture motion graph by detecting key nodes that split the motion sequences into multiple sub-segments. Transition edges from one node to another are also identified based on the similarity distance between frames. To avoid excessive nodes and edges, two pruning strategies are utilized to keep only an important subset of edges. Motion synthesis is done by running a beam search within the built motion graph to find the best candidate path that matches the input speech.

## Strength
The proposed method is well motivated, and the method design is discussed in detail for reproducibility. While motion graphs have been applied in various applications before, it is interesting to see a motion-matching-based method utilized in gesture synthesis. The method allows using only a subset of training motions to achieve competent results. By using explicit motion matching for synthesis, the method is also able to offer more clarity from the evaluation results about what works or not (i.e., higher appropriateness with keyword nodes in the graph).

## Weakness
Some paragraphs may need more details or clarifications. For example, in Line 271, the definition of discontinuous frames is not well discussed. Similarly, in Section 4.2.3, it may be helpful to have a formal definition of the feature $f$ for a key node.

It would be helpful to discuss further the few-shot property of the proposed method. Specifically, the reviewer is interested in the trade-off between data utilization and the quality of gesture synthesis. For example, how feasible is it for the method to utilize only X minutes of gesture motion and still produce reasonable results?

## Rating Justification
Overall, the paper proposes a new method based on motion graphs.
The main advantage is the few-shot learning, which only requires a smaller subset of training data to achieve competent results. The exposition is clear and includes interesting ideas for replication and future improvements. I believe the paper will be helpful for the workshop and challenge attendees.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
xPQcKA56N4j
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Discrete Diffusion for Co-Speech Gesture Synthesis
["Ankur Chemburkar", "Shuhong Lu", "Andrew Feng"]
In this paper, we describe the gesture synthesis system we developed for our entry to the GENEA Challenge 2023. One challenge in learning the co-speech gesture model is that there may be multiple viable gesture motions for the same speech utterance. Therefore, compared to a deterministic regression model, a probabilistic model is preferred to handle the one-to-many mapping problem. Our system utilizes the vector-quantized variational autoencoder (VQ-VAE) and discrete diffusion as the framework for predicting co-speech gestures. Since the gesture motions are produced by sampling the discrete gesture tokens using the discrete diffusion process, the method is able to produce diverse gestures given the same speech input. Based on the user evaluation results, we further discuss the strengths and limitations of our system, and provide the lessons learned when developing and tuning the system. The subjective evaluation results show that our method ranks in the middle for human-likeness among all submitted entries. In the speech appropriateness evaluations, our method has preferences of 55.4% for matched agent gestures and 51.1% for matched interlocutor gestures. Overall, we demonstrated the potential of discrete diffusion models in gesture generation.
["gesture synthesis", "computer animation", "neural networks"]
ABSTRACT
In this paper, we describe the gesture synthesis system we developed for our entry to the GENEA Challenge 2023. One challenge in learning the co-speech gesture model is that there may be multiple viable gesture motions for the same speech utterance. Therefore, compared to a deterministic regression model, a probabilistic model is preferred to handle the one-to-many mapping problem. Our system utilizes the vector-quantized variational autoencoder (VQ-VAE) and discrete diffusion as the framework for predicting co-speech gestures. Since the gesture motions are produced via sampling the discrete gesture tokens using the discrete diffusion process, the method is able to produce diverse gestures given the same speech input. Based on the user evaluation results, we further discuss the strengths and limitations of our system, and provide the lessons learned when developing and tuning the system. The subjective evaluation results show that our method ranks in the middle for human-likeness among all submitted entries. In the speech appropriateness evaluations, our method has preferences of 55.4% for matched agent gestures and 51.1% for matched interlocutor gestures. Overall, we demonstrate the potential of discrete diffusion models in gesture generation.

CCS CONCEPTS
• Computing methodologies → Intelligent agents; Animation; Neural networks.

KEYWORDS
gesture synthesis, computer animation, neural networks

ACM Reference Format:
Ankur Chemburkar, Shuhong Lu, and Andrew Feng. 2023. Discrete Diffusion for Co-Speech Gesture Synthesis. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23 Companion), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3610661.3616556

[ACM permission and copyright notice: ICMI '23 Companion, October 9–13, 2023, Paris, France. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-8-4007-0321-8/23/10. https://doi.org/10.1145/3610661.3616556]

1 INTRODUCTION
Co-speech gesture synthesis is an important capability for driving virtual character movements in conversational interactions with human users. It plays an essential role in augmenting the virtual human with non-verbal behaviors that mimic actual human communications, in addition to speech lip-syncing animations. However, it is not trivial to synthesize gesture motions that are both human-like and correspond well to the speech input.

In general, gesture generation from speech to motion is a non-deterministic one-to-many mapping, which indicates that multiple gestures could correspond to the same speech input to convey a similar meaning.
For example, a left-hand beat, a right-hand beat, or a beat involving both hands would all be appropriate representations of a beat motion corresponding to an utterance. Therefore, instead of using deterministic models [13, 40, 41] to predict gestures, recent methods utilize probabilistic frameworks [2, 23] that sample the latent space to accommodate the non-deterministic nature of gesture synthesis.

For the GENEA Challenge [21], we developed our gesture synthesis system based on the vector-quantized variational autoencoder (VQ-VAE) and denoising diffusion probabilistic models. We assume that, by utilizing discrete tokens, the gesture synthesis problem can be regarded as token sampling based on predicted logits. This allows gestures that are far apart in the motion space to still be mapped to the same input utterance. By leveraging the disentanglement of information in the latent space of the VQ-VAE, the system gains the potential for controllable gesture synthesis. Diffusion methods have been adapted successfully for various applications, including image and motion synthesis [10, 35, 44]. The motivation for our system is to utilize these recent developments in generative models for gesture synthesis. One more insight for employing the diffusion process is that diffusion models are inherently robust to noise and uncertainty in the data. We aim to reduce the jittering results generated by many previous methods. Diffusion can effectively denoise corrupted inputs by stepping backward through the diffusion process, aiding in data recovery and reconstruction tasks. Specifically, we first learn the discrete latent codes from the input motions using a VQ-VAE. These codes are then used by the discrete denoising diffusion probabilistic model (D3PM) to learn the denoising process.
By learning the denoising model in the discrete latent space, the method is able to leverage the synthesis strength of the diffusion process while greatly reducing the computational costs, requiring far fewer diffusion steps to converge. After predicting the discrete codes, the model then reconstructs the gesture motions through the decoder of the VQ-VAE. From the synthesis results, we found that the method is able to produce diverse gestures with good motion dynamics. A demonstration video showcasing our results can be accessed by visiting the provided link: here.

2 BACKGROUND
2.1 Co-Speech Gesture Synthesis
In the realm of speech gesture synthesis, traditional rule-based approaches have relied on manually created sets of gesture units, employing predefined rules and heuristics to generate gestures based on linguistic and contextual information [5, 19, 25]. Some approaches have attempted to extract gesture units from training speech–gesture pairs [12, 16]. However, these methods have struggled to accurately estimate gesture attributes and effectively form units, thereby impacting the final quality of results.

In contrast, learning-based approaches have emerged, wherein certain methods utilize speech–gesture pair data to train end-to-end models that directly predict co-speech gestures, treating the task as a regression problem from speech to gestures [6, 14, 20, 40]. However, a significant challenge arises when a single speech input corresponds to multiple variants of gestures, as the regression model tends to average the gesture poses, resulting in inferior outcomes. This challenge is commonly referred to as the one-to-many mapping issue from speech to gestures.

Recent advancements have approached gesture synthesis in a probabilistic framework, enabling the generation of multiple gesture sequences from a single speech input through latent space sampling [1, 2, 7, 23, 24, 27].
Nonetheless, as the length of the sequence increases, the process of generating data sequentially becomes time-consuming, and dependency information is lost as each element relies on the previously generated ones [29].

Based on the aforementioned points, we propose a model that combines VQ-VAE and diffusion techniques to tackle these challenges and enhance the synthesis of speech gestures.

2.2 Discrete Latent Space Learning
A VAE (variational autoencoder) is a type of generative model that learns a compressed representation of input data by mapping it to a lower-dimensional latent space, typically modeled as a Gaussian distribution, using an encoder. In the case of VQ-VAE, the latent space is discretized into a finite set of codebooks [36]. This allows for the encoding of original gestures into small, trainable data units using vector quantization. Recent model designs and training techniques have focused on improvements for learning the latent space reconstructions. For instance, Jukebox [9] trained separate VQ-VAEs on data with different resolutions by hierarchically downsampling the input data. RQ-VAE [30] reduces the reconstruction errors by recursively quantizing the feature maps using a fixed-size codebook.

One known issue in VQ-VAE is codebook collapse [30], where multiple embeddings in the codebook collapse and become identical or nearly identical during training. This collapse leads to a loss of diversity in learned representations and can adversely affect model performance and generation quality. Several techniques have been proposed to mitigate codebook collapse, including re-initializing unused codes to random vectors during each training iteration [9], normalizing the mean squared error (MSE) for reconstruction [39], and updating codebook embeddings with exponential moving averages [30].

The VQ-VAE method typically utilizes autoregressive transformers to learn a probability distribution over the latent space during the generative stage.
However, autoregressive models often struggle with capturing long-range dependencies in the data, as each element's conditioning is limited to the previous elements. In this work, we instead apply discrete diffusion to enlarge the sampling window size without negatively affecting the performance of the generated sequences.

2.3 Denoising Diffusion Probabilistic Models
Diffusion models have emerged as a prominent approach in image synthesis and motion generation, showcasing their ability to generate complex and realistic results. In contrast to autoregressive generative models, diffusion models provide greater flexibility with reduced error accumulation during inference and are well-suited for parallel training, since they are not constrained by step-by-step sampling [10, 17, 31–33].

In the continuous diffusion process, the target data array, such as gesture motions in our case, undergoes an iterative injection of Gaussian noise through a forward Markov process until pure noise is obtained. In the subsequent reverse process, the model learns to gradually denoise the sample. The diffusion transformer framework has found application in motion synthesis domains, including tasks like audio-conditioned gesture generation [43] that can effectively handle long-term dependencies in gesture sequences. Several notable adaptations of diffusion models have been made for human motion synthesis as well, such as generating raw motion frames [35] and mitigating jittering problems through time-varying weight schedules for noise estimation [8]. In the realm of gesture synthesis, Ao et al. [3] leverage a latent diffusion model and apply a Contrastive Language–Image Pretraining strategy [28] to learn the relationship between speech transcripts and gestures. Additionally, Zhu et al. [46] focus on ensuring temporal coherence by tailoring their Diffusion Co-Speech Gesture framework in the context of gesture synthesis.

Diffusion models can also be extended to discrete data, including categorical labels or text.
For example, D3PM [4] utilizes a transition matrix in the noising step to handle discrete data. Another variant, the VQ-Diffusion model [15], combines a VQ-VAE with a conditional DDPM variant to model the latent space for text-to-image synthesis. In our system, we adapted the discrete diffusion model to produce gesture token sequences based on input conditions.

3 DATA PRE-PROCESSING
The training data for the GENEA Challenge 2023 is based on a subset of the Talking with Hands (TWH) dataset [22]. The dataset includes the entirety of dyadic interactions, with audio and speech text features from both the main agent and the interlocutor.

In accordance with [42], we undertook analogous data preprocessing procedures. For the input gesture representation, we first downsampled the input motions to 30 fps and applied a sliding window of 64 frames with a step size of 10 frames to produce gesture samples. Each gesture sample is converted into a tensor of size T×J×D, where T=64 is the sliding window size, J is the number of joints, and D is the size of the joint rotation representation.

We use D=6 as the representation for joint rotations, based on previous research [45], to prevent singularities and reduce rotation approximation errors. The pose dimension we used is 153, which includes 6D rotation vectors for 25 joints and the root translation. For each gesture sample, our target is to predict the main agent poses, and we combine the audio features from both the main agent and the interlocutor as the input conditions to our model. Following the baseline data processing scripts provided by the organizers, the audio features include Mel-frequency cepstral coefficients (MFCCs), spectrogram, and speech prosody.
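The windowing described above can be sketched as follows; this is a minimal illustration of the stated shapes (T=64 frames, step 10, pose dimension 153 = 25 joints × 6D rotation + 3D root translation), not the authors' preprocessing script, and `make_windows` is a hypothetical helper.

```python
import numpy as np

def make_windows(motion, window=64, step=10):
    """motion: (num_frames, pose_dim) array -> (num_windows, window, pose_dim)."""
    starts = range(0, len(motion) - window + 1, step)
    return np.stack([motion[s:s + window] for s in starts])

pose_dim = 25 * 6 + 3            # 153, as stated in the paper
motion = np.zeros((300, pose_dim))  # 10 s of dummy motion at 30 fps
samples = make_windows(motion)      # overlapping training samples
```

A 300-frame clip yields (300 − 64) // 10 + 1 = 24 overlapping samples, so the 10-frame step gives roughly a 6× data increase over non-overlapping windows.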
We concatenate all three features for both agents into the final speech audio features.

4 METHOD
The method implemented in our system uses a two-stage architecture to train the gesture synthesis models; the first stage involves learning discrete tokens using VQ-VAE, while the second stage makes use of the discrete diffusion process to learn conditional token distributions. Figure 1 presents a summary of our approach based on discrete diffusion.

4.1 Discrete Gesture Token Learning
We employ a latent space vector quantization model specially trained on the domain of three-dimensional human gestures. Given a human gesture represented by a sequence of poses g ∈ R^{L×Dg}, where L denotes the length of the gesture sequence and Dg denotes the dimensions of a single gesture frame, an encoder E converts these frames into gesture tokens or snippets s ∈ R^{l×h}, where l is significantly less than L and h denotes the latent dimension. Then, using a discrete quantization technique DQ and a learned codebook C with K embedding entries (c_1, ..., c_K) of dimension R^h, these snippets are converted into quantized vectors b ∈ R^{l×h}. DQ transforms s by comparing each snippet s_i to all codebook entries and replacing the snippet with the closest codebook index. Hence, the process DQ is defined as:

    k_i = argmin_{c_j ∈ C} ||s_i − c_j||    (1)

In the reverse quantization process, to determine the latent embedding for each snippet, DQ' transforms the indices k into the relevant entries b from codebook C. In the end, a decoder D reconstructs b back to the 3D space of human gestures. The general formulation of this autoencoder technique is:

    ĝ = D(DQ'(DQ(E(g))))    (2)

This procedure is trained with an embedding loss to update the codebook entries and stabilize training, and a reconstruction loss between g and ĝ, given by:

    L_vq = ||ĝ − g||_1 + ||sg[E(g)] − b||_2^2 + β ||E(g) − sg[b]||_2^2    (3)

sg[·] stands for the stop-gradient operation in this context, and β is a weighting factor.
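The quantization step DQ in Eq. (1) can be sketched as a nearest-neighbour lookup against the codebook; this is a minimal NumPy illustration with made-up values, not the authors' implementation, and `quantize` is a hypothetical name.

```python
import numpy as np

def quantize(snippets, codebook):
    """snippets: (l, h); codebook: (K, h) -> indices (l,), quantized vectors (l, h)."""
    # pairwise squared distances between snippets and codebook entries
    d = ((snippets[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)          # k_i = argmin_j ||s_i - c_j||, as in Eq. (1)
    return idx, codebook[idx]       # reverse quantization DQ' is just a lookup

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])   # K=2 entries, h=2
snippets = np.array([[0.1, -0.1], [0.9, 1.2]])  # l=2 encoder outputs
idx, b = quantize(snippets, codebook)
```

Since the argmin is not differentiable, training uses the straight-through estimator discussed next: gradients are copied from b back to the snippets.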
Since the quantization process DQ is not differentiable, back-propagation is made possible by using the straight-through gradient estimator [37].

In our system, the encoder and decoder layers of the VQ-VAE model are a series of convolutional layers with skip connections, adapted from recent work in image synthesis [11]. Since their original application was 2D image synthesis, we changed the 2D convolution layers into 1D to better fit the data dimensions of the gesture motions. We use l = L/4 in our experiments, which gives us a sequence length l of 16.

4.2 Diffusion for Discrete Gesture Tokens
The discrete diffusion model and its continuous equivalent share many similarities. The forward diffusion process gradually corrupts the sample through a Markov chain q(k_t | k_{t−1}), given a sequence of discrete tokens k_0 ∈ I^l, where the subscript denotes the diffusion step. Following the discrete diffusion process [15], we employ the forward process to create progressively noisier latent variables k_1, ..., k_T ∈ I^l, where T represents the total number of diffusion steps. In this discrete diffusion setting, k_T consists of pure noise or all masked tokens.

The reverse diffusion process samples from the reverse distribution q(k_{t−1} | k_t, k_0) in an attempt to reconstruct k_0 from k_T. To approximate the reverse distribution, we train a transformer model as the denoising model. The transformer model produces the distribution p_θ(k_{t−1} | k_t, y), where y denotes the condition (e.g., speech, text, interlocutor gestures, or their combination).

The transition probabilities between codebook indices are defined by fixed transition matrices Q_t ∈ R^{(K+1)×(K+1)} at each timestep. The matrix Q_t is given by:

    Q_t =
    [ α_t+β_t   β_t       β_t      ...  0 ]
    [ β_t       α_t+β_t   β_t      ...  0 ]
    [ β_t       β_t       α_t+β_t  ...  0 ]
    [ ...       ...       ...      ...  ... ]
    [ γ_t       γ_t       γ_t      ...  1 ]    (4)

The [MASK] token is represented by the extra dimension in K+1.
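The matrix in Eq. (4) can be constructed and sanity-checked numerically; this is an illustrative sketch (not the authors' schedule), with toy values chosen so that α_t + K·β_t + γ_t = 1, which makes every column of Q_t sum to one.

```python
import numpy as np

def transition_matrix(K, alpha, beta, gamma):
    """Build the (K+1)x(K+1) matrix of Eq. (4): keep index with prob. alpha+beta,
    move to any index with prob. beta, become [MASK] with prob. gamma; [MASK] is absorbing."""
    Q = np.full((K + 1, K + 1), beta)
    np.fill_diagonal(Q, alpha + beta)
    Q[K, :] = gamma                 # any index -> [MASK] with probability gamma
    Q[:, K] = 0.0                   # [MASK] never transitions back ...
    Q[K, K] = 1.0                   # ... it is absorbing
    return Q

K, beta, gamma = 4, 0.05, 0.1       # toy codebook size and noise levels
alpha = 1.0 - K * beta - gamma      # keeps every column summing to 1
Q = transition_matrix(K, alpha, beta, gamma)
Qbar = Q @ Q                        # cumulative matrix, cf. Q̄_t = Q_t ... Q_1
```

Because products of column-stochastic matrices stay column-stochastic, the cumulative matrix Q̄_t used for the closed-form forward distribution q(k_t | k_0) remains a valid set of transition probabilities.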
According to Q_t, at each diffusion step an index in k_t has a probability of Kβ_t of being replaced by another index chosen randomly from the K indices, a probability of γ_t of turning into the [MASK] index, and a probability of α_t of staying the same index.

During training, the forward diffusion process is made efficient by utilizing the closed-form expression [15] of the cumulative transition matrix Q̄_t = Q_t ... Q_1, which gives the transition probability from k_0 to k_t and the corresponding forward probability distribution q(k_t | k_0). Throughout the reverse process, the model learns to approximate the posterior q(k_{t−1} | k_t, k_0) with p_θ(k_{t−1} | k_t, y), as mentioned earlier.

To enhance generation results, recent efforts [4, 18] utilize a reparameterization approach, approximating the distribution rather than directly modeling the posterior. The denoising model produces denoised gesture tokens given by p_θ(k̃_0 | k_t, y). By using the denoised token distribution p_θ(k̃_0 | k_t, y) and the posterior distribution q(k_{t−1} | k_t, k̃_0), we sample the (t−1)-th gesture from p_θ(k_{t−1} | k_t, y) during inference.

The diffusion model is implemented as a transformer architecture [38] with 19 layers and 16 attention heads. We use 100 diffusion steps for our method and set the condition hidden dimension to 512.

[Figure 1: Architecture of the VQ-Diffusion model. The top half represents the VQ-VAE model framework. The bottom-left part briefly shows the forward and reverse processes of the diffusion training stage. The bottom-right part explains the inference stage with the reparameterization trick.]

4.3 Classifier-Free Guidance
The diffusion model attempts to optimize the prior distribution p(k|y) during the training phase of a conditional generation task, using k as a sample and y as the associated condition, provided that the posterior distribution p(y|k) is satisfied.
It is probable that this posterior probability will be disregarded throughout training. The model may simply use the corrupted sample for reconstruction and ignore the conditional input, because it has access to both the corrupted sample and the condition. This results in the posterior issue [34], i.e., poor alignment between the generated sample and the condition.

Therefore, both p(k|y) and p(y|k) must be included in our optimization objective. One way to do this is to optimize log p(k|y) + s log p(y|k), where s denotes the guidance scale, a hyperparameter. By using Bayes' theorem, this optimization objective can be expressed as:

    argmax_k [log p(k) + (s+1)(log p(k|y) − log p(k))]    (5)

where p(k) is the unconditional distribution of k. To handle the unconditional inputs, the model is also trained with a 'null' condition [26] for a select percentage of samples. It has been shown that implementing a learnable conditional vector instead of a 'null' condition is more suitable for training classifier-free guidance [34]. We adopt the technique with a learnable null vector in our implementation. Empirically, we found that using classifier-free guidance with a proper guidance scale improves the overall gesture synthesis results.

5 RESULTS AND DISCUSSION
5.1 Implementation and Experiments
We trained the VQ-VAE for 35k steps (120 epochs) with a batch size of 256, which takes approximately 90 minutes to show proper convergence. The VQ-VAE model was trained with both the L2 reconstruction loss and the codebook loss. In addition, we utilized the Fréchet Gesture Distance (FGD) as the perceptual metric to evaluate whether the reconstructed motions were statistically faithful to the original motion styles. Figure 2 (top row) shows the loss graphs for training the VQ-VAE, which demonstrate that the method is capable of learning the discrete representation and reconstructing the original gestures. The VQ-VAE model shows good gesture reconstruction capability, as evidenced by a best validation FGD of 0.7.
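The guidance rule in Eq. (5) above amounts to a simple reweighting of the conditional and unconditional log-probabilities; the sketch below is an illustration with made-up logits, not the authors' transformer outputs, and `guided_logprobs` is a hypothetical name.

```python
import numpy as np

def guided_logprobs(logp_cond, logp_uncond, s):
    """Eq. (5): log p(k) + (s+1) * (log p(k|y) - log p(k))."""
    return logp_uncond + (s + 1.0) * (logp_cond - logp_uncond)

# Toy 3-token distributions standing in for the denoiser's outputs.
logp_cond = np.log(np.array([0.7, 0.2, 0.1]))    # conditioned on y
logp_uncond = np.log(np.array([0.4, 0.4, 0.2]))  # 'null' condition
guided = guided_logprobs(logp_cond, logp_uncond, s=4.0)  # the paper found s=4 best
```

With s = 0 the rule reduces to the plain conditional distribution, while larger s amplifies tokens the condition makes more likely than the unconditional prior, at the risk of over-sharpening.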
However, we empirically observed one peculiarity: using the VQ-VAE checkpoint with the best reconstruction FGD may produce worse results when training the discrete diffusion model in the second stage. We suspected this may be due to overfitting, and thus chose a VQ-VAE checkpoint with an FGD of 1 for training the discrete diffusion model.

For training the second-stage diffusion model, the KL-divergence loss was used, since the diffusion operates on discrete labels. For selecting the best checkpoint, FGD was again used as the evaluation metric to reflect the motion quality of the synthesized gestures. During training, the discrete diffusion model converged with a steady decrease in KL loss until the model started to overfit at around 12k steps, again with a batch size of 256. The FGD also converged smoothly without large fluctuations, as shown in Figure 2 (bottom row). As seen in the plots, FGD continued to improve despite the increase in validation loss. Therefore, for stage 2, we picked the checkpoint with the lowest FGD, since we observed empirically that the overfitted model with lower FGD resulted in better-looking gestures.

[Figure 2: Metric plots on the GENEA 2023 training and validation data. The top row shows the metrics for training and validating the VQ-VAE stage, with training loss, validation loss, and FGD from left to right. The bottom row shows the metrics for the diffusion model trained and validated on the above VQ-VAE, again with training loss, validation loss, and FGD from left to right.]

5.2 Subjective Evaluations
The user study and evaluations were conducted by the GENEA 2023 organizers. The videos for the subjective evaluations were rendered from the gesture motion submissions of each team. Since the challenge dataset is based on dyadic conversations between two agents, three tasks were evaluated to properly assess different qualities of the generated gesture motions.
The human-likeness study measures the overall quality of the generated motions without factoring in the speech content. The appropriateness-for-agent-speech study measures whether the synthesized gestures correspond well to the input speech without considering the interlocutor. Finally, appropriateness for the interlocutor includes the dyadic interactions, evaluating whether the interlocutor's motions are proper given the conversation and the main agent's motions. In the following, we further discuss the evaluation results for our system (SI).

Figures 3, 4a, and 4b show the subjective evaluations of the various models on the test dataset. Our model (SI) shows average performance and ranks in the middle of all competing models. The average result can be attributed to a few reasons. First, due to the effort spent developing and tuning the VQ-diffusion model, we were not able to perform extensive experiments with all the different input conditions within the timeline of the Challenge. The model has therefore been conditioned only on the audio of the main agent and the interlocutor for simplicity. A possible improvement would be to include additional conditions such as the text transcript for better speech context, interlocutor gestures for more appropriate dyadic gestures, and speaker identities for varying the gesture styles of different speakers. A combination of these input features could be fused with the audio features in a joint embedding space, which could serve as a better conditional input for the diffusion model.

[Figure 3: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at the 25th and 75th percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.]

Another reason for the average performance is that we ignored synthesizing the finger joints when training our models and focused only on producing the body and arm motions. Including these additional finger motions would likely enhance the details of the gestures and boost the overall motion quality in the subjective evaluations.

Moreover, on inspecting our generated gestures visually, we observed a jittering issue in some results. Specifically, the synthesized gesture motions may sometimes produce abrupt movements that look like noise and motion artifacts. Originally, we thought this was due to the singularity of the pose representation.

[Figure 4: Bar plots visualising the response distribution in the appropriateness studies: (a) appropriateness for agent speech, (b) appropriateness for the interlocutor. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied ("They are equal") responses, and the red bar (top) represents responses preferring the mismatched motion, with the height of each bar proportional to the fraction of responses in each category. Lighter colours correspond to slight preference, and darker colours to clear preference. On top of each bar is a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.]

However, the jittering still persisted after we switched to the 6D rotation representation. Therefore, we speculated that a possible reason for this effect could be the discrete nature of the representation. During the learning process, the discrete diffusion process might predict a shift between codebook indices representing two very different gestures. Even though the VQ-VAE decoder should alleviate discontinuous motions, this may still lead to sudden speed changes in the gesture being performed and reduce the overall smoothness of the produced motion. Resolving this issue requires a deeper investigation into the diffusion model training to understand the cause. Some heuristics could also be implemented to prevent sampling subsequent gesture tokens that are too far apart in the motion space.

While we believe the proposed architecture of discrete conditional diffusion is a promising method, a significant disadvantage is having to train two different models. It requires training both the VQ-VAE model for learning the discrete latent codes and the discrete diffusion model for learning the conditional inference. Thus, the performance of the diffusion model depends heavily on the quality of the VQ-VAE, and slight variance in the VQ-VAE can lead to significant differences in the final performance. In our experiments, we found that the codebook size of the VQ-VAE is also an important factor, and it is easy to overfit if a large codebook size is chosen. For example, using a codebook size of 1024 produces worse results than a codebook size of 256, which was used in our final model. Another hyperparameter that requires tuning is the guidance scale in the diffusion process. The final quantitative results vary significantly with the guidance scale. We found a guidance scale of 4 to give the best results.

6 CONCLUSIONS AND TAKEAWAYS
In this paper, we described the gesture synthesis method of our submission entry to the GENEA Challenge 2023 [21].
Overall, the discrete diffusion method is able to leverage the generative strength of the diffusion process while reducing the inference time compared to running the diffusion on the full motion poses. However, the user study results showed that there is still room for improvement in our proposed system. In the future, we plan to address the issues of jittering artifacts and finger motions to improve the overall motion quality. We also hope to experiment with additional input conditions to produce proper motions in dyadic scenarios. We believe the method requires more refinement and, once these drawbacks are addressed, could be a promising direction for generating stylized gestures using various input conditions such as audio, text, and speaker identities.

7 ACKNOWLEDGMENT
This work is supported by University Affiliated Research Center (UARC) award W911NF-14-D-0005. Statements and opinions expressed and content included do not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
V1aEk7bt-Jg
This paper describes a method for diverse co-speech gesture synthesis using a discrete diffusion process. The authors discuss the benefits of using a discrete diffusion process for this task and give a reasonable explanation of their design choices and experiments. Overall, the work is technically sound, the method is well organized, and the experiments are clearly discussed.
6: Marginally above acceptance threshold
The paper is well-written, the sections are well-organized, and the contribution is clear. The paper is technically sound. The authors have motivated their reason for using discrete diffusion for such a task and discussed the strengths and current limitations of their method. A few points to consider: 1. Details on the design choices for discrete diffusion should be included. Do the authors use D3PM as-is for this task? 2. An ablation without classifier-free guidance should be included to justify the claim that a proper guidance scale improves the overall gesture synthesis results. 3. Video results would have been beneficial for judging the visual quality of the method.
3: The reviewer is fairly confident that the evaluation is correct
xPQcKA56N4j
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Discrete Diffusion for Co-Speech Gesture Synthesis
["Ankur Chemburkar", "Shuhong Lu", "Andrew Feng"]
In this paper, we describe the gesture synthesis system we developed for our entry to the GENEA Challenge 2023. One challenge in learning the co-speech gesture model is that there may be multiple viable gesture motions for the same speech utterance. Therefore compared to a deterministic regression model, a probabilistic model will be preferred to handle the one-to-many mapping problem. Our system utilizes the vector-quantized variational autoencoder (VQ-VAE) and discrete diffusion as the framework for predicting co-speech gestures. Since the gesture motions are produced via sampling the discrete gesture tokens using the discrete diffusion process, the method is able to produce diverse gestures given the same speech input. Based on the user evaluation results, we further discuss about the strength and limitations of our system, and provide the lessons learned when developing and tuning the system. The subjective evaluation results show that our method ranks in the middle for human-likeness among all submitted entries. In the the speech appropriateness evaluations, our method has preferences of 55.4% for matched agent gesture and 51.1% for matched interlocutor gestures. Overall, we demonstrated the potential of discrete diffusion models in gesture generation.
["gesture synthesis", "computer animation", "neural networks"]
ABSTRACTIn this paper, we describe the gesture synthesis system we devel-oped for our entry to the GENEA Challenge 2023. One challenge inlearning the co-speech gesture model is that there may be multipleviable gesture motions for the same speech utterance. Thereforecompared to a deterministic regression model, a probabilistic modelwill be preferred to handle the one-to-many mapping problem.Our system utilizes the vector-quantized variational autoencoder(VQ-VAE) and discrete diffusion as the framework for predictingco-speech gestures. Since the gesture motions are produced viasampling the discrete gesture tokens using the discrete diffusionprocess, the method is able to produce diverse gestures given thesame speech input. Based on the user evaluation results, we furtherdiscuss about the strength and limitations of our system, and pro-vide the lessons learned when developing and tuning the system.The subjective evaluation results show that our method ranks inthe middle for human-likeness among all submitted entries. In thethe speech appropriateness evaluations, our method has prefer-ences of 55.4% for matched agent gesture and 51.1% for matchedinterlocutor gestures. Overall, we demonstrated the potential ofdiscrete diffusion models in gesture generation.CCS CONCEPTS•Computing methodologies →Intelligent agents ;Animation ;Neural networks .KEYWORDSgesture synthesis, computer animation, neural networksACM Reference Format:Ankur Chemburkar, Shuhong Lu, and Andrew Feng. 2023. Discrete Diffusionfor Co-Speech Gesture Synthesis. In INTERNATIONAL CONFERENCE ONMULTIMODAL INTERACTION (ICMI ’23 Companion), October 9–13, 2023,Paris, France. ACM, New York, NY, USA, 7 pages. 
https://doi.org/10.1145/3610661.3616556

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ICMI '23 Companion, October 9–13, 2023, Paris, France
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-8-4007-0321-8/23/10...$15.00
https://doi.org/10.1145/3610661.3616556

1 INTRODUCTION
Co-speech gesture synthesis is an important capability for driving virtual character movements in conversational interactions with human users. It plays an essential role in augmenting the virtual human with non-verbal behaviors that mimic actual human communications in addition to speech lip-syncing animations. However, it is not trivial to synthesize gesture motions that are both human-like and correspond well to the speech input.
In general, the process of gesture generation from speech to motion is a non-deterministic one-to-many mapping, which indicates that multiple gestures could correspond to the same speech input to convey a similar meaning.
For example, a left-hand beat, a right-hand beat, or a beat involving both hands would all be appropriate representations of a beat motion corresponding to an utterance. Therefore, instead of using deterministic models [13, 40, 41] to predict gestures, recent methods have utilized probabilistic frameworks [2, 23] that sample the latent space to accommodate the non-deterministic nature of gesture synthesis.
For the GENEA challenge [21], we have developed our gesture synthesis system based on the vector-quantized variational autoencoder (VQ-VAE) and denoising diffusion probabilistic models. We assume that by utilizing discrete tokens, the gesture synthesis problem can be regarded as token sampling based on the predicted logits. This allows gestures that are far apart in the motion space to still be mapped to the same input utterance. By leveraging the disentanglement of information in the latent space of VQ-VAE, the system gains the potential for controllable gesture synthesis. Diffusion methods have been adapted successfully for various applications including image and motion synthesis [10, 35, 44]. The motivation for our system is to utilize these recent developments in generative models for gesture synthesis. One more insight for employing the diffusion process is that diffusion models are inherently robust to noise and uncertainty in the data. We aim to reduce the jittering results generated by many previous methods. Diffusion can effectively denoise corrupted inputs by stepping backward through the diffusion process, aiding in data recovery and reconstruction tasks. Specifically, we first learn the discrete latent codes from the input motions using VQ-VAE. These codes are then used by the discrete denoising diffusion probabilistic models (D3PM) to learn the denoising process.
By learning the denoising model in the discrete latent space, the method is able to leverage the synthesis strength of the diffusion process while greatly reducing the computational cost by requiring far fewer diffusion steps to converge. After predicting the discrete codes, the model then reconstructs the gesture motions through the decoder of VQ-VAE. From the synthesis results, we found that the method is able to produce diverse gestures with good motion dynamics. A demonstration video showcasing our results can be accessed by visiting the provided link: here.

2 BACKGROUND
2.1 Co-Speech Gesture Synthesis
In the realm of speech gesture synthesis, traditional rule-based approaches have relied on manually created sets of gesture units, employing predefined rules and heuristics to generate gestures based on linguistic and contextual information [5, 19, 25]. Some approaches have attempted to extract gesture units from training speech-gesture pairs [12, 16]. However, these methods have struggled to accurately estimate gesture attributes and effectively form units, thereby impacting the final quality of results.
In contrast, learning-based approaches have emerged, wherein certain methods utilize speech-gesture pair data to train end-to-end models that directly predict co-speech gestures, treating the task as a regression problem from speech to gestures [6, 14, 20, 40]. However, a significant challenge arises when a single speech input corresponds to multiple variants of gestures, as the regression model tends to average the gesture poses, resulting in inferior outcomes. This challenge is commonly referred to as the one-to-many mapping from speech to gestures issue.
Recent advancements have approached gesture synthesis in a probabilistic framework, enabling the generation of multiple gesture sequences from a single speech input through latent space sampling [1, 2, 7, 23, 24, 27].
Nonetheless, as the length of the sequence increases, the process of generating data sequentially becomes time-consuming, and dependency information is lost as each element relies on the previously generated ones [29].
Based on the aforementioned points, we propose our model that combines the VQ-VAE and diffusion techniques to tackle these challenges and enhance the synthesis of speech gestures.

2.2 Discrete Latent Space Learning
A VAE (Variational Autoencoder) is a type of generative model that learns a compressed representation of input data by mapping it to a lower-dimensional latent space, typically modeled as a Gaussian distribution, using an encoder. In the case of VQ-VAE, the latent space is discretized into a finite set of codebook entries [36]. This allows for the encoding of original gestures into small, trainable data units using vector quantization. Recent model design and training techniques have focused on improvements for learning the latent space reconstructions. For instance, Jukebox [9] trained separate VQ-VAEs on data with different resolutions by hierarchically downsampling the input data. RQ-VAE [30] reduces the reconstruction errors by recursively quantizing the feature maps using a fixed-size codebook.
One known issue in VQ-VAE is codebook collapse [30], where multiple embeddings in the codebook collapse and become identical or nearly identical during training. This collapse leads to a loss of diversity in learned representations and can adversely affect model performance and generation quality. Several techniques have been proposed to mitigate codebook collapse, including re-initializing unused codes to random vectors during each training iteration [9], normalizing mean squared error (MSE) for reconstruction [39], and updating codebook embeddings with exponential moving averages [30].
The VQ-VAE method typically utilizes autoregressive transformers to learn a probability distribution over the latent space during the generative stage.
However, autoregressive models often struggle with capturing long-range dependencies in the data, as each element's conditioning is limited to the previous elements. In this work, we instead applied discrete diffusion to enlarge the sampling window size without negatively affecting the performance of the generated sequences.

2.3 Denoising Diffusion Probabilistic Models
Diffusion models have emerged as a prominent approach in image synthesis and motion generation, showcasing their ability to generate complex and realistic results. In contrast to autoregressive generative models, diffusion models provide greater flexibility with reduced error accumulation during inference and are well-suited for parallel training since they are not constrained by step-by-step sampling [10, 17, 31–33].
In the continuous diffusion process, the target data array, such as gesture motions in our case, undergoes an iterative injection of Gaussian noise through a forward Markov process until pure noise is obtained. In the subsequent reverse process, the model learns to gradually denoise the sample. The diffusion transformer framework has found application in motion synthesis domains, including tasks like audio-conditioned gesture generation [43] that can effectively handle long-term dependencies in gesture sequences. Several notable adaptations of diffusion models have been made for human motion synthesis as well, such as generating raw motion frames [35] and improving jittering problems through time-varying weight schedules for noise estimation [8]. In the realm of gesture synthesis, Ao et al. [3] leverage a latent diffusion model and apply a Contrastive-Language-Image-Pretraining strategy [28] to learn the relationship between speech transcripts and gestures. Additionally, Zhu et al. [46] focus on ensuring temporal coherence by tailoring their Diffusion Co-Speech Gesture framework in the context of gesture synthesis.
Diffusion models can also be extended to discrete data, including categorical labels or text.
For example, D3PM [4] utilizes a transition matrix in the noising step to handle discrete data. Another variant, the VQ-Diffusion model [15], combines a VQ-VAE with a conditional DDPM variant to model the latent space for text-to-image synthesis. In our system, we adapted the discrete diffusion model to produce gesture token sequences based on input conditions.

3 DATA PRE-PROCESSING
The training data for the GENEA Challenge 2023 is based on a subset of the Talking with Hands (TWH) dataset [22]. The dataset includes the entirety of dyadic interactions, with audio and speech text features from both the main agent and the interlocutor.
In accordance with [42], we undertook analogous data preprocessing procedures. For the input gesture representation, we first down-sampled the input motions to 30 fps and applied a sliding window of 64 frames with a step size of 10 frames to produce gesture samples. Each gesture sample is converted into a tensor of size T×J×D, where T=64 is the sliding window size, J is the number of joints, and D is the size of the joint rotation representation. We use D=6 as the representation for joint rotations based on previous research [45] to prevent singularities and reduce rotation approximation errors. The pose dimension we used is 153, which includes 6D rotation vectors for 25 joints and the root translation. For each gesture sample, our target is to predict the main agent poses, and we combine the audio features from both the main agent and the interlocutor as the input conditions to our model. Following the baseline data processing scripts provided by the organizers, the audio features include Mel-frequency cepstral coefficients (MFCCs), spectrogram, and speech prosody.
We concatenate all three features for both agents into the final speech audio features.

4 METHOD
The method implemented in our system uses a two-stage architecture to train the gesture synthesis models; the first stage involves learning discrete tokens using VQ-VAE, while the second stage makes use of the discrete diffusion process to learn conditional token distributions. Figure 1 presents a summary of our approach based on discrete diffusion.

4.1 Discrete Gesture Token Learning
We employ a latent space vector quantization model that has been specially trained on the realm of three-dimensional human gestures. When given a human gesture represented by a sequence of poses g ∈ R^{L×D_g}, where L denotes the length of the gesture sequence and D_g denotes the dimension of a single gesture frame, an encoder E converts these frames into gesture tokens or snippets s ∈ R^{l×h}, where l denotes a number significantly less than L and h denotes the latent dimension. Then, using a discrete quantization technique DQ and a learned codebook C with K embedding entries (c_1, ..., c_K) of dimension R^h, these fragments are converted into quantized vectors b ∈ R^{l×h}. DQ transforms s by comparing each (s_i) to all codebook entries and replacing the snippet with the closest codebook index. Hence, the process DQ is defined as

k_i = argmin_{c_j ∈ C} ||s_i − c_j||  (1)

In the reverse quantization process, to determine the latent embedding for each snippet, DQ' transforms the indices k into the relevant entries b from codebook C. In the end, a decoder D reconstructs b back to the 3D space of human gestures. The general formulation of this autoencoder technique is:

ĝ = D(DQ'(DQ(E(g))))  (2)

This procedure is trained with an embedding loss to update the codebook entries and stabilize training, and a reconstruction loss between g and ĝ, given by:

L_vq = ||ĝ − g||_1 + ||sg[E(g)] − b||_2^2 + β ||E(g) − sg[b]||_2^2  (3)

Here sg[.] stands for the stop-gradient operation and β is a weighting factor.
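The nearest-codebook lookup of Eq. (1) and the loss of Eq. (3) can be sketched in NumPy as follows. This is a minimal illustration with made-up shapes, not the authors' implementation; stop-gradients are omitted because no training graph is built here, and the weighting factor value is an assumption:

```python
import numpy as np

def quantize(s, codebook):
    """Eq. (1): map each snippet s_i to its closest codebook entry.

    s: (l, h) encoder outputs; codebook: (K, h) entries.
    Returns the discrete indices k and the quantized vectors b (DQ')."""
    dists = ((s[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (l, K)
    k = dists.argmin(axis=1)
    return k, codebook[k]

def vq_loss(g, g_hat, enc_out, b, beta=0.25):
    """Eq. (3) without stop-gradients: L1 reconstruction plus codebook
    and beta-weighted commitment terms (numerically identical here,
    since sg[] only changes gradient flow, not the value)."""
    recon = np.abs(g_hat - g).sum()
    embed_term = ((enc_out - b) ** 2).sum()
    return recon + embed_term + beta * embed_term
```

In a real implementation the two squared-error terms differ only in which side the gradient is stopped on, which is what lets the codebook and the encoder be updated by separate terms.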
Since the quantization process DQ is not differentiable, back-propagation is made possible by using the straight-through gradient estimator [37].
In our system, the encoder and decoder layers of the VQ-VAE model are a series of convolutional layers with skipped connections, adapted from recent work in image synthesis [11]. Since their original application was 2D image synthesis, we changed the 2D convolution layers into 1D to better fit the data dimensions of the gesture motions. We use l = L/4 in our experiments, which gives us a sequence length l of 16.

4.2 Diffusion for Discrete Gesture Tokens
The discrete diffusion model and its continuous equivalent share many similarities. The forward diffusion process gradually corrupts the sample through a Markov chain q(k_t | k_{t−1}), given a sequence of discrete tokens k_0 ∈ I^l, where the subscript denotes the diffusion step. Following the discrete diffusion process [15], we employ the forward process to create progressively noisier latent variables k_1, ..., k_T ∈ I^l, where T represents the total number of diffusion steps. In this discrete diffusion formulation, k_T consists of pure noise or all masked tokens.
The reverse diffusion process samples from the reverse distribution q(k_{t−1} | k_t, k_0) in an attempt to reconstruct k_0 from k_T. To approximate the reverse distribution, we train a transformer model as the denoising model. The transformer model produces the distribution p_θ(k_{t−1} | k_t, y), where y denotes the condition (e.g., speech/text/interlocutor gestures or their combination).
The transitional probabilities between codebook indices are defined by fixed transition matrices Q_t ∈ R^{(K+1)×(K+1)} at each timestep. The matrix Q_t is given by

Q_t = [ α_t+β_t   β_t       β_t       ...   0
        β_t       α_t+β_t   β_t       ...   0
        β_t       β_t       α_t+β_t   ...   0
        ...       ...       ...       ...   ...
        γ_t       γ_t       γ_t       ...   1 ]  (4)

The [MASK] token is represented by the extra dimension in K+1.
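One forward noising step under Q_t can be sketched as below. This is an illustrative sketch, not the paper's schedule: it assumes the column-stochastic convention [Q_t]_{ij} = q(k_t = i | k_{t−1} = j) and fixes γ_t = 1 − α_t − Kβ_t so each column sums to one:

```python
import numpy as np

def make_Qt(K, alpha, beta):
    """Eq. (4): transition matrix over K codebook indices plus one
    [MASK] state (index K)."""
    gamma = 1.0 - alpha - K * beta
    Q = np.full((K + 1, K + 1), beta)
    np.fill_diagonal(Q, alpha + beta)  # stay on the same index
    Q[K, :] = gamma                    # any token may turn into [MASK]
    Q[:K, K] = 0.0                     # ...but [MASK] never leaves
    Q[K, K] = 1.0                      # [MASK] is absorbing
    return Q

def forward_step(k, Q, rng):
    """Sample k_t ~ q(k_t | k_{t-1}) independently per token."""
    return np.array([rng.choice(Q.shape[0], p=Q[:, i]) for i in k])
```

As α_t shrinks toward zero over the steps, the chain converges to all-[MASK] tokens, matching the description of k_T above; the cumulative matrix Q̄_t = Q_t ... Q_1 used for the closed-form forward distribution is simply the product of these single-step matrices.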
According to Q_t, at each diffusion step an index in k_t has a probability of Kβ_t of being replaced by another index chosen randomly from the K indices, a probability of γ_t of turning into the [MASK] index, and a probability of α_t of staying the same index.
During training, the forward diffusion process becomes efficient by utilizing the closed-form equation [15] of the cumulative transition matrix Q̄_t = Q_t ... Q_1, which expresses the transition probability from k_0 to k_t and the corresponding forward probability distribution q(k_t | k_0). Throughout the reverse process, the model learns to approximate the posterior q(k_{t−1} | k_t, k_0) with p_θ(k_{t−1} | k_t, y), as mentioned earlier.
To enhance generation results, recent efforts [4, 18] utilize a reparameterization approach, approximating the distribution rather than directly modeling the posterior. The denoising model produces denoised gesture tokens given by p_θ(k̃_0 | k_t, y). By using the denoised token distribution p_θ(k̃_0 | k_t, y) and the posterior distribution q(k_{t−1} | k_t, k̃_0), we sample the (t−1)-th gesture from p_θ(k_{t−1} | k_t, y) during inference.
The diffusion model is implemented as a transformer architecture [38] with 19 layers and 16 attention heads. We use 100 diffusion steps for our method and set the condition hidden dimension to 512.

Figure 1: Architecture of the VQ-Diffusion model. The top half represents the VQ-VAE model framework. The bottom-left figure briefly shows the forward and reverse processes of the training stage of diffusion. The bottom-right figure explains the inference stage with the reparametrization trick.

4.3 Classifier-Free Guidance
The diffusion model attempts to optimize the prior distribution p(k|y) during the training phase of a conditional generation task, using k as a sample and y as the associated condition, provided that the posterior distribution p(y|k) is satisfied.
It is probable that this posterior probability will be disregarded throughout training. Because the model has access to both the corrupted sample and the condition, it may merely use the corrupted sample for reconstruction and ignore the conditional input. This causes the posterior issue [34], i.e., poor alignment between the generated sample and the condition.
Therefore, both p(k|y) and p(y|k) must be included in our optimization objective. One way to do this is to optimize log p(k|y) + s log p(y|k), where s denotes the guidance scale, a hyperparameter. By using Bayes' theorem, this optimization objective can be expressed as:

argmax_k [log p(k) + (s+1)(log p(k|y) − log p(k))]  (5)

where p(k) is the unconditional distribution of k. To handle the unconditional inputs, the model is also trained with a 'null' condition [26] for a select percentage of samples. It has been shown that implementing a learnable conditional vector instead of a 'null' condition is more suitable for training classifier-free guidance [34]. We adopt the technique with a learnable null vector in our implementation. Empirically, we found that using classifier-free guidance with a proper guidance scale improves the overall gesture synthesis results.

5 RESULTS AND DISCUSSION
5.1 Implementations and Experiments
We chose to train the VQ-VAE over 35k steps (120 epochs) with a batch size of 256, which takes approximately 90 minutes to show proper convergence. The VQ-VAE model was trained with both the L2 reconstruction loss and the codebook loss. In addition, we utilized Fréchet Gesture Distance (FGD) as the perceptual metric to evaluate whether the reconstructed motions were statistically faithful to the original motion styles. Figure 2 (top row) shows the loss graphs for training the VQ-VAE, which demonstrate that the method is capable of learning the discrete representation and reconstructing the original gestures. The VQ-VAE model shows good gesture reconstruction capabilities, as evidenced by the best validation FGD of 0.7.
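At sampling time, the guided objective of Eq. (5) amounts to mixing the denoiser's conditional output with its unconditional (learnable-null) output. A minimal sketch in log-probability space, with illustrative variable names:

```python
import numpy as np

def guided_logits(logp_cond, logp_uncond, s):
    """Eq. (5): log p(k) + (s + 1) * (log p(k|y) - log p(k)).

    logp_cond / logp_uncond: per-token log-probability arrays from the
    denoiser run with the real condition y and with the learnable null
    condition, respectively. s is the guidance scale."""
    return logp_uncond + (s + 1.0) * (logp_cond - logp_uncond)
```

With s = 0 this reduces to the plain conditional prediction; larger s pushes sampling toward tokens whose likelihood rises most when the condition is present (the paper reports s = 4 working best).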
However, empirically we observed one peculiarity: using the VQ-VAE model with the best reconstruction FGD may produce worse results when training the discrete diffusion model in the second stage. We suspected this may be due to overfitting and thus chose a VQ-VAE checkpoint with an FGD of 1 for training the discrete diffusion model.
For training the second-stage diffusion model, the KL divergence loss was used, since the diffusion operates on the discrete labels. For selecting the best checkpoint, FGD was also used as the evaluation metric to reflect the motion quality of the synthesized gestures. During training, the discrete diffusion model converged with a steady decrease in KL loss until the model started to overfit at around 12K steps, again with a batch size of 256. The FGD also converged smoothly without large fluctuations, as shown in Figure 2 (bottom row). As seen in the plots, FGD continued to improve despite the increase in validation loss. Therefore, for stage 2, we picked the checkpoint with the lowest FGD, since it was observed empirically that the overfitted model with lower FGD resulted in better-looking gestures.

Figure 2: Metric plots for training and validation on the GENEA 2023 dataset. The top row shows the metrics for training and validation of the VQ-VAE stage, with training loss, validation loss, and FGD from left to right. The bottom row shows the metrics for the diffusion model trained and validated on the above VQ-VAE, again with training loss, validation loss, and FGD from left to right.

5.2 Subjective Evaluations
The user study and evaluations were conducted by the GENEA 2023 organizers. The videos for the subjective evaluations were rendered from the gesture motion submissions of each team. Since the challenge dataset is based on dyadic conversations between two agents, three tasks were evaluated to properly assess different qualities of the generated gesture motions.
The human-likeness study measures the overall quality of the generated motions without factoring in the speech content. The appropriateness-for-agent-speech study measures whether the synthesized gestures correspond well to the input speech without considering the interlocutor. Finally, appropriateness for the interlocutor includes the dyadic interactions to evaluate whether the interlocutor's motions are proper given the conversations and the main agent's motions. In the following, we further discuss the evaluation results for our system (SI).
Figures 3, 4a, and 4b show the subjective evaluations of the various models on the test dataset. Our model (SI) shows average performance and ranks in the middle of all competing models. The average result can be attributed to a few reasons. First, due to the efforts for developing and tuning the VQ-diffusion model, we were not able to perform extensive experiments with all different input conditions within the timeline of the Challenge. Therefore, the model has been conditioned only on the audio of the main agent and interlocutor for simplicity in the experiments. A possible improvement would be including additional conditions such as the text transcript for better speech context, interlocutor gestures for more appropriate dyadic gestures, and speaker identities for varying the gesture styles of different speakers. A combination of these input features can be fused with the audio features in a joint embedding space, which could serve as a better conditional input for diffusion. Another reason for the average performance is that we ignored synthesizing the finger joints when training our models and focused only on producing the body and arm motions. Including these additional finger motions would likely enhance the details of the gestures and boost the overall motion quality in the subjective evaluations.
Moreover, on inspecting our generated gestures visually, we observed a jittering issue in some results. Specifically, the synthesized gesture motions may sometimes produce abrupt movements that look like noise and motion artifacts. Originally we thought this was due to the singularity of the pose representation.

Figure 3: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at 25 and 75 percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.

Figure 4: Bar plots visualising the response distribution in the appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied ("They are equal") responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colours correspond to slight preference, and darker colours to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.
However, the jittering still persisted after we switched to the 6D rotation representation. Therefore, we speculated that the possible reason for this effect is the discrete nature of the representation. During the learning process, the discrete diffusion process might predict a shift between codebook indices representing two very different gestures. Even though the VQ-VAE decoder should alleviate the discontinuous motions, this may still lead to sudden speed changes in the gesture being performed and reduce the overall smoothness of the produced motion. Resolving this issue requires a deeper investigation into the diffusion model training to understand the cause. Some heuristics could also be implemented to prevent sampling subsequent gesture tokens that are too far apart in the motion space.
While we believe the proposed architecture of discrete conditional diffusion is a promising method, a significant disadvantage of this method is having to train two different models. It requires training both the VQ-VAE model for learning the discrete latent codes and the discrete diffusion model for learning the conditional inference. Thus the performance of the diffusion model depends heavily on the quality of the VQ-VAE, and slight variance in the VQ-VAE can lead to significant differences in the final performance. In our experiments, we found that the codebook size of the VQ-VAE is also an important factor, and it is easy to overfit if a large codebook size is chosen. For example, using a codebook size of 1024 produces worse results than a codebook size of 256, which was used in our final model. Another hyperparameter that requires tuning is the guidance scale in the diffusion process. The final quantitative results vary significantly with the guidance scale. We found a guidance scale of 4 to give the best results.

6 CONCLUSIONS AND TAKEAWAYS
In this paper, we describe the gesture synthesis method of our submission entry to the GENEA Challenge 2023 [21].
Overall, the discrete diffusion method is able to leverage the generative strength of the diffusion process while reducing the inference time compared to running the diffusion on the full motion poses. However, the user study results showed that there is still room for improvement in our proposed system. In the future, we plan to address the issues of jittering artifacts and finger motions to improve the overall motion quality. We also hope to experiment with additional input conditions to produce proper motions in dyadic scenarios. We believe the method requires more refinement and could be a promising direction for generating stylized gestures using various input conditions such as audio, text, and speaker identities once these drawbacks are addressed.

7 ACKNOWLEDGMENT
This work is supported by University Affiliated Research Center (UARC) award W911NF-14-D-0005. Statements and opinions expressed and content included do not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
nN_dw9rj0a
Overall, this paper describes the submitted system in adequate detail, including data processing, methods, and training details. However, minor revision is needed to improve the paper quality.
6: Marginally above acceptance threshold
1. It is strongly advised that the key findings and results be summarized in the abstract (1~2 sentences). How does this system perform in the challenge? 2. I would appreciate the effort put into the descriptions of methods and implementation details. But I think the readers might be confused about the inputs/outputs of the system, by looking at Fig. 1. Audio processing is described in Section 3, but are the audio features used as input in the architecture? 3. The baseline method should be cited, as it is used for data processing. 4. There are some minor errors in this paper. I would suggest that authors do a round of meticulous checks for revision. Line 28: "is __an__ non-deterministic one-to-many mapping" Line 38: "the predicted __the__ logits." 5. I would love to see more insights from this submission entry. For example, are there any other reasons than one2many mapping why VQVAE+Diffusion is chosen? or how is the result compared with the baseline?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
vD3_u_kbkqS
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Diffusion-based co-speech gesture generation using joint text and audio representation
["Anna Deichler", "Shivam Mehta", "Simon Alexanderson", "Jonas Beskow"]
This paper describes a system developed for the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. Our solution builds on an existing diffusion-based motion synthesis model. We propose a contrastive speech and motion pretraining (CSMP) module, which learns joint embeddings for speech and gestures with the aim to learn a semantic coupling between these modalities. The output of the CSMP module is used as a conditioning signal in the diffusion-based gesture synthesis model in order to achieve semantically-aware co-speech gesture generation. Our entry achieved highest human-likeness and highest speech appropriateness rating among the submitted entries. This indicates that our system is a promising approach to achieve human-like co-speech gestures in agents that carry semantic meaning.
["gesture generation", "semantic gestures", "motion synthesis", "diffusion models", "contrastive pre-training"]
ABSTRACT
This paper describes a system developed for the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. Our solution builds on an existing diffusion-based motion synthesis model. We propose a contrastive speech and motion pretraining (CSMP) module, which learns a joint embedding for speech and gesture with the aim to learn a semantic coupling between these modalities. The output of the CSMP module is used as a conditioning signal in the diffusion-based gesture synthesis model in order to achieve semantically-aware co-speech gesture generation. Our entry achieved the highest human-likeness and highest speech appropriateness rating among the submitted entries. This indicates that our system is a promising approach to achieve human-like co-speech gestures in agents that carry semantic meaning.

KEYWORDS
gesture generation, motion synthesis, diffusion models, contrastive pre-training, semantic gestures

ACM Reference Format:
Anna Deichler, Shivam Mehta, Simon Alexanderson, and Jonas Beskow. 2023. Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 09–13, 2023, Paris, France. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3577190.3616117

This work is licensed under a Creative Commons Attribution International 4.0 License.
ICMI '23, October 09–13, 2023, Paris, France
© 2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0055-2/23/10.
https://doi.org/10.1145/3577190.3616117

1 INTRODUCTION
Human communication is inherently multimodal, involving the integration of multiple verbal and non-verbal modalities to convey the information. These modalities work in synergy, collaborating to create a joint representation of the message the speaker intends to convey [29]. In addition to complementing verbal communication, these non-verbal gestures frequently serve as substitutes for words [9, 31].
The semantic meaning contribution of gestures is multi-faceted. Beat gestures primarily emphasize the verbally expressed content, serving to accentuate the spoken message. On the other hand, iconic and pointing gestures go beyond emphasizing content; they directly represent or indicate the referent being discussed. Deictic pointing gestures, often accompanying deictic words, play a crucial role in referential communication by providing vital contextual information for reference disambiguation, while iconic gestures serve to visually represent or symbolize the attributes, actions, or characteristics associated with the referent.
Co-speech gesture generation in robotics and avatars focuses on generating gestures that accompany and extend the verbal modality. However, the generation of audio-driven motion has posed a significant challenge. This difficulty arises from the fact that such motion can only be accurately predicted by very strong probabilistic models, since gestures exhibit high individual variability and are inherently non-deterministic [2]. Recent advances in learning arbitrary probability distributions with diffusion models have offered a way to tackle this problem. These audio-driven gesture generation models have proven to be efficient in reproducing the high variability and expressivity of human gestures; however, integrating semantic content into gesture generation by combining audio and text conditioning is another challenge.
Self-supervised pre-training methods have proven to be an efficient way to learn useful representations for downstream tasks, especially in the case of limited labeled data. Multi-modal pre-training methods learn embedding spaces that encode useful relations between different data modalities. Contrastive Language-Image Pre-Training (CLIP) [32] is a contrastive multi-modal pre-training method that learns a joint representation of image and text data by contrasting positive and negative text-image pair examples in the latent space during training.
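The CLIP-style objective can be sketched as a symmetric cross-entropy over a batch of paired embeddings, where matched pairs lie on the diagonal of the similarity matrix. This is a generic contrastive (InfoNCE-style) illustration with an assumed temperature value, not the CSMP module itself:

```python
import numpy as np

def clip_style_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric contrastive loss: row i of emb_a is the positive pair
    of row i of emb_b; all other rows in the batch act as negatives."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature          # (N, N) cosine similarities

    def xent_diag(lg):
        lg = lg - lg.max(axis=1, keepdims=True)              # stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))                       # diagonal = matches

    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

Minimizing this loss pulls matched pairs together and pushes mismatched pairs apart in the shared latent space, which is the property the pretraining described above relies on.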
This training approach encourages the model to capture the underlying relationship between the two modalities. The problem of co-speech gesture generation involves multiple modalities, with a tight coupling between motion, text and audio. This work aims at combining the expressivity of diffusion-based motion synthesis [2] with the multi-modal understanding of a CLIP-like latent embedding space that models the relations between motion, text and audio in co-speech gestures.

2 RELATED WORK
2.1 Co-speech gesture generation
The primary goal of co-speech gesture generation is to synthesise natural and contextually appropriate gestures. In the early stages of gesture generation research, various rule-based approaches were employed [5, 26, 27], where the generation of gestures was triggered by predefined rules that initiated the playback of pre-recorded gestures. In recent years, this field has been dominated by data-driven, deep learning based modelling methodologies [31].

Early works on deep learning-based gesture synthesis treated it as a regression problem and utilised recurrent [14, 36] and convolutional [21] neural networks to model the generation process. Treating gesture synthesis as a regression problem leads to under-articulated and over-smoothened gestures because of averaging over all the possible outcomes for an input signal. To address this challenge, researchers employed various probabilistic modelling techniques such as VAEs [12], VQ-VAEs [43], Normalising Flows [1] or adversarial techniques like GANs [41, 42].
These methodologies aim to enhance the realism and expressiveness of the generated gestures by learning a distribution over entire utterances and sampling different realisations from it, or by learning powerful transformations from a simple distribution, usually a Gaussian, to the output motion distribution.

Diffusion models [15, 34, 35] have emerged as a notable contemporary probabilistic generative modelling methodology. These models have shown promise in capturing complex data distributions and have gained attention in various fields, including gesture generation [2, 3, 30, 45]. Inspired by these works, our system uses the Denoising Diffusion Probabilistic Modelling (DDPM) [15] formulation with self-supervised representations to synthesise gestures conditioned on the input audio.

2.2 Semantic gesture generation
In order to generate contextually appropriate gestures in agents, it is crucial to take gesture semantics into account. Semantic gestures have a symbolic, representational quality and contribute to the overall meaning in communication. The generation of semantic gestures relies heavily on which input modalities are taken into account in the modeling process [31].

Audio-driven generation can reproduce the coupling between gesture kinematics and the intonation, stress and rhythm present in the audio signal. These systems are good at modeling beat gestures, which can help highlight important points or add emphasis to certain words or phrases [28], [1], [2]. However, in order to generate representational gestures (e.g., iconic, deictic pointing), additional input modalities are needed.
Text-based conditioning is essential to model the relation between semantic and kinematic spaces in order to generate iconic gestures [44], [22], while the generation of deictic pointing gestures needs referential target information [10]. In this work we develop a novel approach to jointly model audio and text conditioning in gesture generation through a contrastive self-supervised learning approach, in order to extend an existing audio-conditioned system with semantic capabilities.

2.3 Using language-based pre-training approaches in motion generation
Recent works have leveraged different pre-training approaches to learn the semantic coupling between text and motion spaces. [46] uses a GPT-like module to generate code indices based on text embeddings, which are utilized by a VQ-VAE module in motion generation, while [17] proposes MotionGPT, which performs language modeling on both motion and text in a unified manner, treating human motion as a specific language. Previous work has also leveraged CLIP's multimodal understanding to generate meaningful motion. [37] develops an auto-encoder based motion generation model which learns a motion embedding space aligned with CLIP's latent space, allowing for the generation of expressive and versatile text-based motions. [38] uses CLIP latents as conditioning information in diffusion-based human motion generation. Similarly, [8] conditions on CLIP latents, but combines latent-space based and diffusion-based motion generation. Most similar to our work is [3], which learns a gesture-text joint embedding using contrastive learning and a CLIP-based style encoding module in a diffusion-based gesture synthesis model.

3 METHOD
3.1 Self-supervised representations of text and audio
We employ pre-trained self-supervised representations for text and audio for both the main agent and the interlocutor.
We use data2vec [4], a framework for self-supervised representation learning on data of different modalities (text, audio and images), for which pre-trained models are available¹. Data2vec leverages a transformer architecture in a self-distillation setup to achieve contextual embeddings, predicting latent representations of the full input data based on a masked view of the input.

For audio, we use the data2vec-audio-base-960h model, which takes one-channel 16 kHz audio as input. As output we use the last hidden layer, which gives us a sequence of 768-dimensional embedding vectors at a rate of 50 Hz. The output is then converted to 30 Hz using polyphase resampling (scipy.signal.resample_poly) in order to match the frame rate of the motion data.

For text, we use the data2vec-text-base model. Input to the model is a sequence of byte-pair encoded text tokens. Just as for the audio, we use the last hidden layer of the data2vec model to obtain a 768-dimensional vector for each input token. We use the word-timed transcriptions provided in the dataset (see [23]) to maintain a start and end time for each token, then we replicate the output vector at a rate of 30 Hz for the duration of the token. The result is a text-embedding sequence that is aligned with, and of the same length as, the audio and motion data sequences.

3.2 Joint representation with Contrastive Speech and Motion Pretraining (CSMP)
Contrastive pre-training can effectively capture the semantic relationships between two modalities, but it usually requires a large batch size and a large dataset to learn efficient joint representations [7]. This can be challenging in our case because of dataset-specific properties [23] such as the presence of an interlocutor and the skeletal nodes of the characters.

¹ huggingface.co
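The frame-rate alignment described above can be sketched as follows. This is a minimal illustration, not the authors' code: the helper names and array shapes are our own; the 50 Hz to 30 Hz conversion is a 3/5 polyphase resampling, and token replication uses the word-timed intervals.

```python
import numpy as np
from scipy.signal import resample_poly


def align_audio_embeddings(audio_emb):
    """Resample a (T, 768) sequence of 50 Hz data2vec audio embeddings
    to the 30 Hz motion frame rate via polyphase resampling (3/5)."""
    return resample_poly(audio_emb, up=3, down=5, axis=0)


def align_text_embeddings(tokens, n_frames, dim=768, fps=30):
    """Replicate each token's embedding over its word-timed interval.
    `tokens` is a list of (start_sec, end_sec, vector) triples."""
    out = np.zeros((n_frames, dim), dtype=np.float32)
    for start, end, vec in tokens:
        a = int(round(start * fps))
        b = min(int(round(end * fps)), n_frames)
        out[a:b] = vec  # hold the token vector for its whole duration
    return out
```

Frames outside any token interval stay zero here; how the paper handles silence between words is not specified.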
In such a case, having representations which already capture semantic information can be used as inputs to the CLIP module. We therefore devise a variation of CLIP, which we call Contrastive Speech and Motion Pretraining (CSMP).

In CSMP, we propose several modifications to the original CLIP architecture within the context of multimodal understanding, namely:
(1) We replace the vision transformer present in the original CLIP architecture with a regular transformer architecture, which eliminates the patching process typically employed for 2-D image analysis. This modification is motivated by the nature of text and audio.
(2) The input to this modified transformer is derived from concatenated representations of the output of the pretrained data2vec module for text and audio, as described in Section 3.1, instead of the raw tokens used by the original CLIP.
(3) For the text encoder in CLIP, we change the input from discrete text tokens to continuous motion vectors, thus eliminating the need for an embedding layer. This alteration is intended to map the semantic information contained in the text and audio representations to the motion representation in the joint space of CSMP's representations.
(4) Since the original CLIP takes discrete, tokenized text as input, it had a context length of 77; for continuous modalities like the output of data2vec and motion, this can be insufficient to capture longer-term dependencies. In order to overcome this and increase the encoder's field of view, we increased the context length to 500 timesteps.

The final architecture of the CSMP module is described in Fig. 1.

Figure 1: Architecture of the Contrastive Speech and Motion Pretraining (CSMP) module.

In order to train this architecture with the CLIP loss, we chunked each input Xi = [x1, ..., xT] in a sliding-window manner, with a window length of 500 and a hop length of 250, forming multiple splits for each utterance:

Xi = [[x1, ..., x500], [x250, ..., x750], ..., [xT−500, ..., xT]]

We hypothesise that this helped generalisation despite the fixed context size, because the positional encoding could see the data at a specific timestep xt in different relative positions during training. The source code is available on GitHub in the GestCLIP branch².

3.3 DDPM for motion synthesis
Diffusion models are a recent class of generative models that have become popular due to their expressivity and flexible conditioning. They are based on the idea that complex data distributions can be learned by iteratively transforming a simple known distribution, such as a Gaussian, through a series of diffusion steps. Unlike VAEs, which incorporate latent variable modeling, diffusion models directly model the data distribution without explicitly introducing latent variables. Diffusion models consist of a forward process and a reverse (denoising) process. The forward process defines a Markov chain of N diffusion steps that gradually adds noise to samples from the data distribution x0 ∼ q(x0). The noise steps are assumed to be fixed, zero-mean Gaussian distributions without learnable parameters, q(xn | xn−1) = N(xn; √(1 − βn) xn−1, βn I), where N denotes the multivariate Gaussian density function evaluated at xn and {βn} (n = 1, ..., N) is the noise schedule. In the reverse process the model learns to invert the forward process, so that it is able to construct the desired data samples from noise.
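The sliding-window chunking and the CLIP-style contrastive objective used to train CSMP can be sketched as below. This is a simplified NumPy-only sketch under our own assumptions: the temperature value and the end-anchoring of the final window are not specified in the paper, and the real system uses learned transformer encoders rather than precomputed embeddings.

```python
import numpy as np


def chunk_sequence(x, window=500, hop=250):
    """Split a (T, D) sequence into overlapping training windows
    (window 500, hop 250, as used for CSMP). Anchoring the final
    window to the sequence end is our own boundary-handling choice."""
    if len(x) <= window:
        return [x]
    starts = list(range(0, len(x) - window + 1, hop))
    if starts[-1] + window < len(x):
        starts.append(len(x) - window)  # keep the trailing frames
    return [x[s:s + window] for s in starts]


def clip_loss(speech_emb, motion_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of paired
    speech/text-window and motion-window embeddings; matching pairs
    lie on the diagonal of the cosine-similarity matrix."""
    def normalize(z):
        return z / np.linalg.norm(z, axis=-1, keepdims=True)

    s = normalize(np.asarray(speech_emb, dtype=float))
    m = normalize(np.asarray(motion_emb, dtype=float))
    logits = s @ m.T / temperature  # (B, B) similarity matrix
    idx = np.arange(len(s))

    def xent(lg):
        # cross-entropy with the diagonal as the correct class
        lg = lg - lg.max(axis=1, keepdims=True)
        p = np.exp(lg) / np.exp(lg).sum(axis=1, keepdims=True)
        return float(-np.log(p[idx, idx]).mean())

    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned pairs drive the loss towards zero, while mismatched pairs are pushed apart, which is the mechanism the modifications (1)-(4) above feed into.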
If βn is small enough, the reverse step p(xn−1 | xn) is also Gaussian, and a neural network is used to approximate the parameters of the distribution pθ(xn−1 | xn) = N(xn−1; μθ(xn, n), Σθ(xn, n)).

The Denoising Diffusion Probabilistic Model (DDPM) [15] simplifies the objective of diffusion models and establishes a connection to score matching, a technique for estimating the gradients of the probability distribution of the data. These gradients are then used to generate samples via Langevin dynamics, a stochastic process that simulates the motion of particles in a fluid. In DDPM the score-matching objective is reformulated as a noise-prediction objective, L = E_{x0, n, ε}[κn ‖ε − εθ(xn, n)‖²], where εθ is a neural network intended to predict the noise ε that was added to x0, and κn are weights.

Conditional generation in diffusion models can be achieved via classifier-guided or classifier-free models. In classifier-guided diffusion models, the gradient ∇x log fφ(y | xn) of a separately trained classifier fφ(y | xn) is used to guide the diffusion process [11]. Classifier-free diffusion models combine conditional and unconditional diffusion in order to guide the generation. In the above formulation this means that a conditional network εθ(xn, n, c) with conditioning input c is trained, where the conditioning information is randomly discarded during training, so that in the reverse diffusion process conditional generation can be achieved by combining the conditioned and unconditioned models: ε̄θ(xn, n, c) = εθ(xn, n, c) + γ(εθ(xn, n, c) − εθ(xn, n)) [16]. Denoising-diffusion based conditional generation has been applied in various domains. In [33], the CLIP-embedding based conditioning input is randomly set to zero in order to achieve high-quality image synthesis. DiffWave [20] is a denoising-diffusion based model for waveform generation, which uses mel spectrograms and speaker ID as conditioning information.
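The forward noising and the classifier-free guidance combination above can be written compactly. The closed-form sampling of q(xn | x0) below uses the standard DDPM identity with ᾱn = ∏(1 − βi), which follows from composing the per-step Gaussians; function names are our own:

```python
import numpy as np


def forward_diffuse(x0, n, alpha_bar, rng=None):
    """Sample x_n ~ q(x_n | x_0) = N(sqrt(abar_n) x_0, (1 - abar_n) I),
    the closed form of the forward chain; abar_n is the cumulative
    product of (1 - beta_i). Returns the noised sample and the noise
    eps, which is the regression target for eps_theta."""
    rng = rng if rng is not None else np.random.default_rng()
    eps = rng.standard_normal(x0.shape)
    xn = np.sqrt(alpha_bar[n]) * x0 + np.sqrt(1.0 - alpha_bar[n]) * eps
    return xn, eps


def cfg_noise(eps_cond, eps_uncond, gamma):
    """Classifier-free guidance: eps_bar = eps_c + gamma * (eps_c - eps_u).
    gamma = 0 recovers the plain conditional prediction."""
    eps_cond = np.asarray(eps_cond)
    return eps_cond + gamma * (eps_cond - np.asarray(eps_uncond))
```

In training, the network is fit to predict `eps` from `xn`; at sampling time `cfg_noise` is applied at every reverse step to strengthen the conditioning.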
The Listen-Denoise-Act (LDA) model [2] builds on the DiffWave model and uses mel spectrogram information for human motion synthesis. Audio-conditioned human motion synthesis, such as dancing and co-speech gesture generation, has been a challenge in machine learning due to the ambiguity and high versatility required for good performance in these tasks. The denoising-diffusion based LDA model has proven to be a powerful model for generating versatile and expressive motion in the fields of dance and co-speech gesture generation. In our work we use the residual denoising network of LDA with conditioning from the CSMP module for semantically-aware co-speech gesture generation.

The LDA model follows DiffWave in parameterising the denoising network εθ, but replaces the dilated convolutions in the stacked residual blocks with a stack of Transformers [39] or Conformers [13] in order to capture and integrate information over long time scales. In our experiments we use a stack of 3 translation-invariant transformers [40] in each of the 15 residual blocks. The model learns a distribution of the form p(x1:T | a1:T), where a1:T is the acoustic conditioning, x1:T = x1:T,0 is the output of the diffusion process, and xt is a representation of the pose at time step t in the motion sequence. In our case, the mel spectrogram based acoustic conditioning of LDA is replaced with the joint audio and text based output of the CSMP module, where the outputs for the interlocutor and the main agent are concatenated into a conditioning signal of dimension ct ∈ R^1024. This is the conditioning input in the classifier-free diffusion guidance formulation.

² https://github.com/shivammehta25/CLIP/tree/GestCLIP
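The construction of the conditioning signal amounts to a per-frame concatenation of the two CSMP output streams. The 512 dimensions per speaker below are our inference from the stated 1024-dimensional total, not a figure given in the paper, and the helper name is our own:

```python
import numpy as np


def make_conditioning(agent_csmp, interloc_csmp):
    """Concatenate per-frame CSMP outputs for the main agent and the
    interlocutor into the conditioning signal c_t (stated dimension
    1024, i.e. presumably 512 per speaker)."""
    agent_csmp = np.asarray(agent_csmp)
    interloc_csmp = np.asarray(interloc_csmp)
    assert agent_csmp.shape == interloc_csmp.shape
    return np.concatenate([agent_csmp, interloc_csmp], axis=-1)
```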
The outputs of the model are the same as in LDA: poses of skeletal joint rotations parametrised using an exponential-map representation relative to a T-pose, similar to [1].

4 DATA PREPARATION
The challenge dataset is a processed version of the Talking With Hands dataset [25]. The original dataset is one of the largest conversational datasets of motion and voice, incorporating 50 hours of dyadic interactions with audio, text and motion modalities. We only used the data provided by the challenge for gesture synthesis.

4.1 Audio DC-removal and muting of cross-talk
We found that the audio data contained a number of loud transient clicking noises. On inspection, it was found that they were due to a significant DC-offset, in combination with the fact that certain sections of the audio signal had been zeroed out as part of an anonymization process. This was easily rectified by subtracting the mean from all non-zeroed-out portions.

Additionally, the data contained a non-negligible amount of cross-talk between the two speakers in the recording. We used the time stamps from the time-aligned text transcriptions to mute all audio falling outside of the intervals marked as speech in the transcription for each speaker. We used a 200 ms ramp function for the muting to avoid introducing transients.

4.2 Motion capture data cleaning
We also noticed that some of the motion capture data contained errors, such as joints suddenly popping to unnatural poses. These errors were predominantly confined to the wrist joints, but also occurred at the hips. As such problems have an impact on model training, and we even found our model reproducing them in synthesis, we performed some data cleanup. We transformed the data to joint positions and detected discontinuities in the wrist speeds using a Hampel filter. This was followed by a manual check of the affected files. In the end, 17 files were removed from the training set.

5 SYSTEM OVERVIEW
A schematic view of the final system can be seen in Figure 2.
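The DC-removal and the Hampel-based discontinuity detection from Sections 4.1 and 4.2 can be sketched as follows. This is a minimal illustration under our own assumptions: the window size, threshold, and the handling of constant-valued windows are our choices, not the authors' settings.

```python
import numpy as np


def remove_dc(audio):
    """Subtract the mean computed over non-zeroed samples only, so the
    anonymised (zeroed-out) sections stay exactly zero."""
    out = np.asarray(audio, dtype=np.float64).copy()
    voiced = out != 0.0
    if voiced.any():
        out[voiced] -= out[voiced].mean()
    return out


def hampel_outliers(speed, window=15, n_sigmas=3.0):
    """Flag frames whose wrist speed deviates from the local median by
    more than n_sigmas * 1.4826 * MAD (a standard Hampel identifier).
    The small epsilon also catches spikes in otherwise constant data."""
    half = window // 2
    flags = np.zeros(len(speed), dtype=bool)
    for i in range(len(speed)):
        seg = speed[max(0, i - half):i + half + 1]
        med = np.median(seg)
        mad = 1.4826 * np.median(np.abs(seg - med))
        if abs(speed[i] - med) > n_sigmas * mad + 1e-8:
            flags[i] = True
    return flags
```

Flagged files would then go to the manual check described above rather than being dropped automatically.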
The system was trained on an NVIDIA GeForce RTX 3090 for 387.4k steps and achieved a loss of 0.013 on the training set and 0.019 on the validation set. No post-processing was applied to the generated output motions.

6 EVALUATION
The evaluation of the generated motions was carried out by the GENEA Challenge organisers; details about the evaluation interface and experiment setup can be found in the evaluation paper [24]. The generated co-speech gestures were evaluated in three separate perceptual studies: human-likeness, appropriateness to the agent's speech, and appropriateness to the interlocutor's motion and speech. The evaluation included two baseline conditions and the natural motion taken from the motion-capture recordings. The monadic baseline ('BM') was generated with [6], which uses information from the main agent for gesture generation, while the dyadic baseline ('BD') is an adapted version of the former, which also includes information from the interlocutor in the conversation. The study participants were recruited through a crowd-sourcing platform from English-speaking countries, and each study incorporated attention checks. Our system, labeled 'SG', achieved top performance in the human-likeness and speech-appropriateness studies based on the generated motions submitted. However, it ranked among the lowest in terms of interlocutor appropriateness.

6.0.1 Human-likeness evaluation. The aim of this study was to evaluate whether the generated motion of the virtual character looks like the motion of a real human. No audio was used, in order to disentangle the human-likeness evaluation from speech appropriateness. The evaluation was based on the HEMVIP methodology [19], where multiple different motion samples are presented in parallel and the participant is asked to rate each sample. Participants could give their ratings on a scale from 0 (worst) to 100 (best). Results for the evaluation are shown in Figure 3.
Our system, denoted 'SG', achieved the best performance among the entries, with a mean rating of 65.6 ± 1.4. Figure 4 also shows that this result is significantly better than all of the entries except 'SF'. Interestingly, the human-likeness score is very close to the mean rating of the natural condition, which was 68.4 ± 1.4, as seen in Table 1. This indicates that our system can generate co-speech gestures which resemble the motion of real humans.

6.0.2 Appropriateness to speech. The aim of this study was to evaluate whether the motion of the virtual character is appropriate for the given speech, controlling for the overall human-likeness of the motion. The participants were presented with a pair of matched and mismatched videos from the same condition in order to disentangle this study from the motion-quality evaluation. Five response options were given for indicating preference between the two videos.

Figure 2: Architecture of the motion synthesis module.

Figure 3: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at 25 and 75 percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.

Figure 4: Significance of pairwise differences between conditions.
White means the condition listed on the y-axis achieved an MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate. Conditions use the same order as the corresponding subfigure in Figure 3.

The responses were converted to integer values in the range [−2, 2]. Our system achieved a MAS score of 0.39 ± 0.07 at the level of α = 0.05, and the matched motion was preferred over the mismatched in 61.8% of the evaluations. With these results it ranked highest amongst the generated motions. Figure 5 visualizes the significant differences between conditions and shows that our system, denoted 'SG', was significantly more appropriate to speech than all of the entries of generated motion. A comparison to the other entries can be found in Table 1.

6.0.3 Appropriateness to interlocutor. The aim of this study was to evaluate whether the motion of the virtual character is appropriate for the given interlocutor behavior (speech and motion). In order to evaluate the mismatched condition, synthetic interactions were created where the main agent was the same but the interlocutor behavior was replaced with one from another interaction. Our system achieved a MAS score of −0.09 ± 0.08 at the level of α = 0.05, and the matched motion was preferred over the mismatched in 46.7% of the evaluations. With these results it ranked among the lowest. Figure 6 visualizes the significant differences between conditions and shows that our system, denoted 'SG', was significantly less appropriate to the interlocutor than half of the entries of generated motion, with no significant difference to the other half. A comparison to the other entries can be found in Table 1.

The MP4-format video stimuli used in the user studies can be accessed through the following link: https://zenodo.org/record/8211449.
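Our reading of the appropriateness scoring (integer responses in [−2, 2], with ties split equally when computing the matched-preference percentage) can be sketched as follows; see the challenge evaluation paper [24] for the authoritative definition.

```python
import numpy as np


def mas_and_pref(responses):
    """Mean appropriateness score (MAS) and the fraction of trials
    preferring matched motion, with ties (0 responses) split equally.
    `responses` are integers in [-2, 2]; positive = matched preferred."""
    r = np.asarray(responses, dtype=float)
    mas = float(r.mean())
    pref = float((np.sum(r > 0) + 0.5 * np.sum(r == 0)) / len(r))
    return mas, pref
```

For example, responses [2, 1, 0, -1] give MAS 0.5 and a matched preference of 62.5%.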
As before, our system is denoted as 'SG'.

Figure 5: Appropriateness for agent speech.

Figure 6: Appropriateness for the interlocutor.

Figure 7: Significant differences between conditions in the two appropriateness studies. White means the condition listed on the y-axis achieved an MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

Table 1: Summary of results for the subjective evaluation studies, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, after splitting ties equally.

Human-likeness             | Speech appropriateness      | Interlocutor appropriateness
Condition Median  Mean     | Condition MAS       Pref.M. | Condition MAS        Pref.M.
NA  71∈[70,71]  68.4±1.0   | NA   0.81±0.06  73.6%       | NA   0.63±0.08  67.9%
SG  69∈[67,70]  65.6±1.4   | SG   0.39±0.07  61.8%       | SA   0.09±0.06  53.5%
SF  65∈[64,67]  63.6±1.3   | SJ   0.27±0.06  58.4%       | BD   0.07±0.06  53.0%
SJ  51∈[50,53]  51.8±1.3   | BM   0.20±0.05  56.6%       | SB   0.07±0.08  51.8%
SL  51∈[50,51]  50.6±1.3   | SF   0.20±0.06  55.8%       | SL   0.07±0.06  53.4%
SE  50∈[49,51]  50.9±1.3   | SK   0.18±0.06  55.6%       | SE   0.05±0.07  51.8%
SH  46∈[44,49]  45.1±1.5   | SI   0.16±0.06  55.5%       | SF   0.04±0.06  50.9%
BD  46∈[43,47]  45.3±1.4   | SE   0.16±0.05  54.9%       | SI   0.04±0.08  50.9%
SD  45∈[43,47]  44.7±1.3   | BD   0.14±0.06  54.8%       | SD   0.02±0.07  52.2%
BM  43∈[42,45]  42.9±1.3   | SD   0.14±0.06  55.0%       | BM  −0.01±0.06  49.9%
SI  40∈[39,43]  41.4±1.4   | SB   0.13±0.06  55.0%       | SJ  −0.03±0.05  49.1%
SK  37∈[35,40]  40.2±1.5   | SA   0.11±0.06  53.6%       | SC  −0.03±0.05  49.1%
SA  30∈[29,31]  32.0±1.3   | SH   0.09±0.07  52.9%       | SK  −0.06±0.05  47.4%
SB  24∈[23,27]  27.4±1.3   | SL   0.05±0.05  51.7%       | SG  −0.09±0.08  46.7%
SC   9∈[9,9]    11.6±0.9   | SC  −0.02±0.04  49.1%       | SH  −0.21±0.05  44.0%

7 DISCUSSION
The subjective evaluation results have shown that our system is capable of generating co-speech gestures that are human-like and speech-appropriate. The high performance on speech appropriateness shows that the current system is a promising approach to achieving semantically-aware co-speech gesture generation in virtual agents.

Our system was top-ranked in the human-likeness and appropriateness-for-agent-speech evaluations, while receiving one of the lowest scores in the appropriateness-to-interlocutor evaluation. This might seem counterintuitive, given that we did train the system to listen to the interlocutor. We believe that there are multiple factors at play here and will outline them below. First, our system was trained to take in speech information of the interlocutor as input (in the form of CSMP embeddings), but we chose not to include interlocutor motion as one of the inputs, due to time constraints.
Feeding interlocutor motion as input might have rendered a system capable of mirroring/mimicry, similar to [18], which could have resulted in a higher rating. Secondly, we would like to discuss another possible explanation, which stems from the nature of the data and how the evaluation was carried out. In the appropriateness evaluations, each system was compared against itself, and the objective was to see to what degree raters could distinguish motion that matched the context from mismatched motion. As mentioned in Section 4.1, there was a certain amount of cross-talk present in the data, i.e. the interlocutor's audio was present in the main agent's channel and vice versa. We took extra measures to eliminate such cross-talk, because not doing so would have resulted in the agent performing co-speech gestures also while listening, based on the cross-talk from the interlocutor. Inspecting the evaluation stimuli based on the output from the different systems in the challenge, it is clear that this seems to happen in certain systems. We can further speculate that such an agent might in fact score favourably in the match/mismatch paradigm, because the gestures would indeed be interlocutor-aware. Future work on improving interlocutor appropriateness could involve conditioning on interlocutor motion, as mentioned above, or training a separate model for listening behavior.

Additional evaluations of the semantic gesture generation capabilities of the model could be of interest for future work. In theory, our model is capable of capturing the semantic relations between speech and gesture spaces through the CSMP model. However, the current subjective evaluation is somewhat limited in measuring the semantic gesture generation capabilities of the model, as these are difficult to disentangle from other aspects, such as speech-gesture synchrony.
Objective evaluation metrics for semantic appropriateness could be helpful in quantifying and improving our system in this regard.

8 CONCLUSIONS
In this paper we described our entry to the GENEA Challenge 2023. We presented a system which builds on an existing diffusion-based motion synthesis model and proposed a conditioning signal that utilizes audio, text and motion data. For this we proposed a CLIP-like contrastive pre-training module, contrastive speech and motion pretraining (CSMP), in order to capture the underlying relations between speech and motion. Our system achieved top performance in human-likeness and speech appropriateness amongst the submitted entries, which shows that it is a promising approach to generating human-like co-speech gestures in agents. Our system ranked relatively low in interlocutor appropriateness, which is a focus for future improvement. Human-like, semantic and interlocutor-appropriate co-speech gesture generation in virtual agents is still an open problem. Our system's high performance in the subjective evaluations is encouraging and indicates that our submitted model is a promising way to achieve these goals.

ACKNOWLEDGMENTS
This work was partially supported by the Advanced Adaptive Intelligent Agents project (Digital Futures), the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by grant no. 20023495 (Development of behavior-oriented HRI AI technology for long-term interaction between service robots and users) funded by the Korean Ministry of Trade, Industry and Energy (MOTIE).
5Ds-kc058l
Contrastive pretrained speech feature extractor with good results.
7: Good paper, accept
This paper presents a novel method of speech feature extraction for the co-speech gesture generation task. It uses contrastive learning to pretrain a speech encoder to extract the information most relevant to the motion. The paper is clearly written and detailed. An explanation of the discrepancy between the evaluation results is also provided. The idea of utilizing pretrained models, i.e., data2vec, as a backbone before the encoders in the contrastive learning is interesting. The authors claim that it may be a way to overcome the lack of a large amount of speech-gesture data, and I agree with it, although more experiments are necessary. It is great to see attempts at using pretrained audio and text models, as they may become informative guidance for building more powerful gesture generation systems. One minor point is that the authors do not explain why their approach works better. Although an ablation study could be difficult in the current setting, qualitative results could provide insights as well. One question regarding the generation: the authors use classifier-free guidance. However, a CLIP that accepts noised versions of motion could be trained to apply classifier guidance, which has been proposed previously in conditional image generation [1]. I am wondering why the authors chose classifier-free guidance over classifier guidance. [1] GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
vD3_u_kbkqS
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Diffusion-based co-speech gesture generation using joint text and audio representation
["Anna Deichler", "Shivam Mehta", "Simon Alexanderson", "Jonas Beskow"]
This paper describes a system developed for the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. Our solution builds on an existing diffusion-based motion synthesis model. We propose a contrastive speech and motion pretraining (CSMP) module, which learns joint embeddings for speech and gestures with the aim to learn a semantic coupling between these modalities. The output of the CSMP module is used as a conditioning signal in the diffusion-based gesture synthesis model in order to achieve semantically-aware co-speech gesture generation. Our entry achieved highest human-likeness and highest speech appropriateness rating among the submitted entries. This indicates that our system is a promising approach to achieve human-like co-speech gestures in agents that carry semantic meaning.
["gesture generation", "semantic gestures", "motion synthesis", "diffusion models", "contrastive pre-training"]
ABSTRACTThis paper describes a system developed for the GENEA (Genera-tion and Evaluation of Non-verbal Behaviour for Embodied Agents)Challenge 2023. Our solution builds on an existing diffusion-basedmotion synthesis model. We propose a contrastive speech and mo-tion pretraining (CSMP) module, which learns a joint embeddingfor speech and gesture with the aim to learn a semantic couplingbetween these modalities. The output of the CSMP module is usedas a conditioning signal in the diffusion-based gesture synthesismodel in order to achieve semantically-aware co-speech gesturegeneration. Our entry achieved highest human-likeness and high-est speech appropriateness rating among the submitted entries.This indicates that our system is a promising approach to achievehuman-like co-speech gestures in agents that carry semantic mean-ing.KEYWORDSgesture generation, motion synthesis, diffusion models, contrastivepre-training, semantic gesturesACM Reference Format:Anna Deichler, Shivam Mehta, Simon Alexanderson, and Jonas Beskow. 2023.Diffusion-Based Co-Speech Gesture Generation Using Joint Text and AudioRepresentation. In INTERNATIONAL CONFERENCE ON MULTIMODAL IN-TERACTION (ICMI ’23), October 09–13, 2023, Paris, France. ACM, New York,NY, USA, 8 pages. https://doi.org/10.1145/3577190.36161171 INTRODUCTIONHuman communication is inherently multimodal involving the in-tegration of multiple verbal and non-verbal modalities to conveythe information. These modalities work in synergy, collaborating tocreate a joint representation of the message the speaker intends toconvey [ 29]. In addition to complementing verbal communication,these non-verbal gestures frequently serve as substitutes for wordsThis work is licensed under a Creative Commons Attribution International4.0 License.ICMI ’23, October 09–13, 2023, Paris, France©2023 Copyright held by the owner/author(s).ACM ISBN 979-8-4007-0055-2/23/10.https://doi.org/10.1145/3577190.3616117[9,31]. 
The semantic meaning contribution of gestures is multi-faceted. Beat gestures primarily emphasize the verbally expressed content, serving to accentuate the spoken message. On the other hand, iconic and pointing gestures go beyond emphasizing content; they directly represent or indicate the referent being discussed. Deictic pointing gestures, often accompanying deictic words, play a crucial role in referential communication by providing vital contextual information for reference disambiguation, while iconic gestures serve to visually represent or symbolize the attributes, actions, or characteristics associated with the referent.

Co-speech gesture generation in robotics and avatars focuses on generating gestures that accompany and extend the verbal modality. However, the generation of audio-driven motion has posed a significant challenge. This difficulty arises from the fact that such motion cannot be accurately predicted by deterministic models, since gestures exhibit high individual variability and are inherently non-deterministic [2]. Recent advances in learning arbitrary probability distributions with diffusion models have offered a way to tackle this problem. These audio-driven gesture generation models have proven to be efficient in reproducing the high variability and expressivity of human gestures; however, integrating semantic content into gesture generation by combining audio and text conditioning remains a challenge.

Self-supervised pre-training methods have proven to be an efficient way to learn useful representations for downstream tasks, especially in the case of limited labeled data. Multi-modal pre-training methods learn embedding spaces that encode useful relations between different data modalities. Contrastive Language-Image Pre-Training (CLIP) [32] is a contrastive multi-modal pre-training method that learns a joint representation of image and text data by contrasting positive and negative text-image pair examples in the latent space during training.
This training approach encourages the model to capture the underlying relationship between the two modalities. The problem of co-speech gesture generation involves multiple modalities, with a tight coupling between motion, text and audio. This work aims at combining the expressivity of diffusion-based motion synthesis [2] with the multi-modal understanding of a CLIP-like latent embedding space that models the relations between motion, text and audio in co-speech gestures.

2 RELATED WORK
2.1 Co-speech gesture generation
The primary goal of co-speech gesture generation is to synthesise natural and contextually appropriate gestures. In the early stages of gesture generation research, various rule-based approaches were employed [5, 26, 27], where the generation of gestures was triggered by predefined rules that initiated the playback of pre-recorded gestures. In recent years, this field has been dominated by data-driven, deep learning based modelling methodologies [31].

Early works on deep learning-based gesture synthesis treated it as a regression problem and utilised recurrent [14, 36] and convolutional [21] neural networks to model the generation process. Treating gesture synthesis as a regression problem leads to under-articulated and over-smoothened gestures because of averaging over all the possible outcomes for an input signal. To address this challenge, researchers employed various probabilistic modelling techniques such as VAEs [12], VQ-VAEs [43], Normalising Flows [1] or adversarial techniques like GANs [41, 42].
These methodologies aim to enhance the realism and expressiveness of the generated gestures by learning a distribution over entire utterances and sampling different realisations from it, or by learning powerful transformations from a simple distribution, usually a Gaussian, to the output motion distribution.

Diffusion models [15, 34, 35] have emerged as a notable and contemporary probabilistic generative modelling methodology. These models have shown promise in capturing complex data distributions and have gained attention in various fields, including gesture generation [2, 3, 30, 45]. Inspired by these works, our system uses the Denoising Diffusion Probabilistic Modelling (DDPM) [15] formulation with self-supervised representations to synthesise gestures conditioned on the input audio.

2.2 Semantic gesture generation
In order to generate contextually appropriate gestures in agents, it is crucial to take gesture semantics into account. Semantic gestures have a symbolic representational quality and contribute to the overall meaning in communication. The generation of semantic gestures is highly reliant on which input modalities are taken into account in the modeling process [31].

Audio-driven generation can reproduce the coupling between gesture kinematics and the intonation, stress and rhythm present in the audio signal. These systems are good at modeling beat gestures, which can help highlight important points or add emphasis to certain words or phrases [28], [1], [2]. However, in order to generate representational gestures (e.g., iconic, deictic pointing), additional input modalities are needed.
Text-based conditioning is essential to model the relation between semantic and kinematic spaces in order to generate iconic gestures [44], [22], while the generation of deictic pointing gestures needs referential target information [10]. In this work we develop a novel approach to jointly model audio and text conditioning in gesture generation through a contrastive self-supervised learning approach, in order to extend the existing audio-conditioned system with semantic capabilities.

2.3 Using language-based pre-training approaches in motion generation
Recent works have leveraged different pre-training approaches to learn the semantic coupling between text and motion spaces. [46] uses a GPT-like module to generate code indices based on text embeddings, which are utilized by a VQ-VAE module in motion generation, while [17] proposes MotionGPT, which performs language modeling on both motion and text in a unified manner, treating human motion as a specific language. Previous work has also leveraged CLIP's multimodal understanding to generate meaningful motion. [37] develops an auto-encoder based motion generation model that learns a motion embedding space aligned with CLIP's latent space, which allows for the generation of expressive and versatile text-based motions. [38] uses CLIP latents as conditioning information in diffusion-based human motion generation. Similarly, [8] conditions on CLIP latents, but combines latent-space based and diffusion-based motion generation. Most similar to our work is [3], which learns a gesture-text joint embedding using contrastive learning and a CLIP-based style encoding module in a diffusion-based gesture synthesis model.

3 METHOD
3.1 Self-supervised representations of text and audio
We employ pre-trained self-supervised representations of text and audio for both the main agent and the interlocutor.
Data2vec [4] is a framework for self-supervised representation learning on data of different modalities (text, audio and images), for which pre-trained models are available at huggingface.co. Data2vec leverages a transformer architecture in a self-distillation setup to achieve contextual embeddings, predicting latent representations of the full input data based on a masked view of the input.

For audio, we use the data2vec-audio-base-960h model, which takes one-channel 16 kHz audio as input. As output we use the last hidden layer, which gives us a sequence of 768-dimensional embedding vectors at a rate of 50 Hz. The output is then converted to 30 Hz using polyphase resampling (scipy.signal.resample_poly) in order to match the frame rate of the motion data.

For text, we use the data2vec-text-base model. Input to the model is a sequence of byte-pair encoded text tokens. Just as for the audio, we use the last hidden layer of the data2vec model to obtain a 768-dimensional vector for each input token. We use the word-timed transcriptions provided in the dataset (see [23]) to obtain a start and end time for each token, then we replicate the output vector at a rate of 30 Hz for the duration of the token. The result is a text-embedding sequence that is aligned with, and of the same length as, the audio and motion data sequences.

3.2 Joint representation with Contrastive Speech and Motion Pretraining (CSMP)
Contrastive pre-training can effectively capture the semantic relationships between two modalities, but it usually requires a large batch size and a large dataset to learn efficient joint representations [7], which can be challenging especially in this case because of dataset-specific properties [23] such as the presence of an interlocutor and the skeletal nodes of the characters.
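The frame-rate alignment described in Section 3.1 can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are ours, the polyphase call is the `scipy.signal.resample_poly` routine named above, and leaving zeros between words is our own assumption about the silent frames.

```python
import numpy as np
from scipy.signal import resample_poly

def align_audio_features(audio_feats, in_rate=50, out_rate=30):
    """Resample a (T, D) data2vec feature sequence from in_rate Hz to
    out_rate Hz with polyphase filtering, as described in Section 3.1."""
    return resample_poly(audio_feats, out_rate, in_rate, axis=0)

def align_text_features(token_vecs, token_times, total_dur, fps=30):
    """Replicate each token embedding over its word-timed interval so the
    text stream matches the 30 Hz motion frame rate.

    token_vecs:  (N, D) array, one data2vec embedding per token
    token_times: list of (start_sec, end_sec) pairs per token
    """
    n_frames = int(round(total_dur * fps))
    out = np.zeros((n_frames, token_vecs.shape[1]))
    for vec, (start, end) in zip(token_vecs, token_times):
        s, e = int(start * fps), min(int(end * fps), n_frames)
        out[s:e] = vec  # frames between words stay zero (our choice)
    return out
```

One second of 50 Hz features yields 30 aligned frames, matching the motion sequence length.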
In such a case, representations which already capture semantic information can be used as the inputs to the CLIP module. Therefore, we devise a variation of CLIP and call it Contrastive Speech and Motion Pretraining (CSMP).

In CSMP, we propose several modifications to the original CLIP architecture within the context of multimodal understanding, namely:
(1) We replace the vision transformer present in the original CLIP architecture with a regular transformer architecture, which effectively eliminates the patching process typically employed for 2-D image analysis. This modification is motivated by the nature of text and audio.
(2) The input to this modified transformer is derived from concatenated representations of the output of the pretrained data2vec module for text and audio, as described in Section 3.1, instead of the raw tokens used in the original CLIP.
(3) For the text encoder in CLIP, we change the input from discrete text tokens to continuous motion vectors, thus eliminating the need for an embedding layer. This alteration is intended to map the semantic information contained in the text and audio representations onto the motion representation in the joint space of CSMP's representations.
(4) Since the original CLIP takes discrete, tokenized text as input, it had a context length of 77; for modalities that are continuous in nature, such as the output of data2vec and motion, this can be insufficient to capture longer-term dependencies. In order to overcome this and increase the encoder's field of view, we increased the context length to 500 timesteps.
The final architecture of the CSMP module is shown in Fig. 1.
Figure 1: Architecture of the Contrastive Speech and Motion Pretraining (CSMP) module. Time-aligned input text and input speech pass through data2vec (text) and data2vec (audio) into the text-and-audio encoder; input motion passes through the motion encoder; the two embeddings are trained with a CLIP loss.

In order to train such an architecture with the CLIP loss, we chunked each input X_i = [x_1, ..., x_T] in a sliding-window manner with a window length of 500 and a hop length of 250, forming multiple splits for each utterance:

X_i = [[x_1, ..., x_500], [x_250, ..., x_750], ..., [x_{T-500}, ..., x_T]]

We hypothesise that this helped generalisation despite a fixed context size, because the positional encoding could see the data at a specific timestep x_t in different relative positions during training. The source code is available on GitHub in the GestCLIP branch (https://github.com/shivammehta25/CLIP/tree/GestCLIP).

3.3 DDPM for motion synthesis
Diffusion models are a recent class of generative models that have become popular due to their expressivity and flexible conditioning. They are based on the idea that complex data distributions can be learned by iteratively transforming a simple known distribution, such as a Gaussian, through a series of diffusion steps. Unlike VAEs, which incorporate latent variable modeling, diffusion models directly model the data distribution without explicitly introducing latent variables. Diffusion models consist of a forward process and a reverse (denoising) process. The forward process defines a Markov chain of N diffusion steps that gradually adds noise to samples from the data distribution x_0 ∼ q(x_0). The noise steps are assumed to be fixed, zero-mean Gaussian distributions without learnable parameters, q(x_n | x_{n-1}) = N(x_n; √(1 − β_n) x_{n-1}, β_n I), where N denotes the multivariate Gaussian density function evaluated at x_n and {β_n}, n = 1..N, is the noise schedule. In the reverse process the model learns to invert the forward process, so that it can construct desired data samples from noise.
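The sliding-window chunking and the CLIP-style objective used to train CSMP (Section 3.2) can be sketched as follows. This is an illustrative NumPy sketch, not the released implementation: the inputs stand in for encoder outputs, and the temperature value is an assumption (CLIP learns it during training).

```python
import numpy as np

def chunk_sequence(x, window=500, hop=250):
    """Sliding-window splits of a (T, D) utterance, as in Section 3.2."""
    chunks = [x[s:s + window] for s in range(0, len(x) - window + 1, hop)]
    if len(x) >= window and (len(x) - window) % hop != 0:
        chunks.append(x[len(x) - window:])  # final window ending at T
    return chunks

def clip_loss(speech_emb, motion_emb, temperature=0.07):
    """Symmetric CLIP-style cross-entropy over a batch of paired (B, D)
    embeddings; matching speech/motion pairs sit on the diagonal."""
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    m = motion_emb / np.linalg.norm(motion_emb, axis=1, keepdims=True)
    logits = s @ m.T / temperature
    idx = np.arange(len(logits))

    def xent(lg):  # mean cross-entropy with the diagonal as the target
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

Correctly paired embeddings yield a lower loss than shuffled pairings, which is the signal that pulls the two modalities into a joint space.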
If β_n is small enough, the reverse step p(x_{n-1} | x_n) is also Gaussian, and a neural network is used to approximate the parameters of the distribution p_θ(x_{n-1} | x_n) = N(x_{n-1}; μ_θ(x_n, n), Σ_θ(x_n, n)).

The Denoising Diffusion Probabilistic Model (DDPM) [15] simplifies the objective of diffusion models and establishes a connection to score matching, a technique for estimating the gradients of the probability distribution of data. These gradients are then used to generate samples via Langevin dynamics, a stochastic process that simulates the motion of particles in a fluid. In DDPM the score-matching objective is reformulated as a noise-prediction objective, L = E_{x_0, n, ε}[κ_n ‖ε − ε_θ(x_n, n)‖₂²], where ε_θ is a neural network intended to predict the noise ε that was added to x_0 and κ_n are weights.

Conditional generation in diffusion models can be achieved via classifier-guided or classifier-free models. In classifier-guided diffusion models, the gradients ∇_x f_φ(y | x_n) of a separately trained classifier f_φ(y | x_n) are used to guide the diffusion process [11]. Classifier-free diffusion models instead combine conditional and unconditional diffusion to guide the generation. In the above formulation this means that a conditional network ε_θ(x_n, n, c) with conditioning input c is trained, where the conditioning information is randomly discarded during training, so that in the reverse diffusion process conditional generation can be achieved by a combination of the conditioned and unconditioned models: ε̄_θ(x_n, n, c) = ε_θ(x_n, n, c) + γ(ε_θ(x_n, n, c) − ε_θ(x_n, n)) [16]. Denoising diffusion based conditional generation has been applied in various domains. In [33], the CLIP-embedding based conditioning input is randomly set to zero in order to achieve high-quality image synthesis. DiffWave [20] is a denoising diffusion based model for waveform generation, which uses mel spectrograms and speaker ID as conditioning information.
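The classifier-free guidance combination above can be sketched in a few lines. This is a generic illustration of the formula in the text; zeroing out the conditioning follows the practice cited from [33], and the drop probability of 0.1 is our own assumption.

```python
import numpy as np

def guided_noise(eps_cond, eps_uncond, gamma):
    """Classifier-free guidance: eps_bar = eps_c + gamma * (eps_c - eps_u),
    matching the combination rule given in the text."""
    return eps_cond + gamma * (eps_cond - eps_uncond)

def drop_conditioning(c, p_uncond=0.1, rng=None):
    """During training, randomly discard the conditioning (here by zeroing
    it, as in [33]) so one network learns both the conditional and the
    unconditional noise estimate. p_uncond is an assumed value."""
    rng = rng or np.random.default_rng()
    return np.zeros_like(c) if rng.random() < p_uncond else c
```

Setting γ = 0 recovers the plain conditional estimate; larger γ pushes samples further towards the conditioning signal at the cost of diversity.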
The Listen-Denoise-Act (LDA) model [2] builds on the DiffWave model and uses mel-spectrogram information for human motion synthesis. Audio-conditioned human motion synthesis, such as dancing and co-speech gesture generation, has been a challenge in machine learning due to the ambiguity and high versatility required for good performance in these tasks. The denoising diffusion based LDA model has proven to be a powerful model for generating versatile and expressive motion in the fields of dance and co-speech gesture generation. In our work we use the residual denoising network of LDA with conditioning from the CSMP module for semantically-aware co-speech gesture generation.

The LDA model follows DiffWave in parameterising the denoising network ε_θ, but replaces the dilated convolutions in the stacked residual blocks with a stack of Transformers [39] or Conformers [13] in order to capture and integrate information over long time scales. In our experiments we use a stack of 3 translation-invariant transformers [40] in each of the 15 residual blocks. The model learns a distribution of the form p(x_{1:T} | a_{1:T}), where a_{1:T} is the acoustic conditioning and x_{1:T} = x_{1:T,0} is the output of the diffusion process, with x_t a representation of the pose at time step t in the motion sequence. In our case, the mel-spectrogram based acoustic conditioning of LDA is replaced with the joint audio- and text-based output of the CSMP module, where the outputs for the interlocutor and the main agent are concatenated into a conditioning signal of dimension c_t ∈ R^1024. This is the conditioning input in the classifier-free diffusion guidance formulation.
The outputs of the model are the same as in LDA: poses of skeletal joint rotations parametrised using an exponential-map representation relative to a T-pose, similarly to [1].

4 DATA PREPARATION
The challenge dataset is a processed version of the Talking With Hands dataset [25]. The original dataset is one of the largest conversational datasets of motion and voice, incorporating 50 hours of dyadic interactions with audio, text and motion modalities. We only used the data provided by the challenge for gesture synthesis.

4.1 Audio DC-removal and muting of cross-talk
We found that the audio data contained a number of loud transient clicking noises. On inspection, it was found that they were due to a significant DC-offset, in combination with the fact that certain sections of the audio signal had been zeroed out as part of an anonymization process. This was easily rectified by subtracting the mean from all non-zeroed portions.

Additionally, the data contained a non-negligible amount of cross-talk between the two speakers in the recording. We used the time stamps from the time-aligned text transcriptions to mute all audio falling outside of the intervals marked as speech in the transcription for each speaker. We used a 200 ms ramp function for the muting to avoid introducing transients.

4.2 Motion capture data cleaning
We also noticed that some of the motion capture data contained errors, such as joints suddenly popping to unnatural poses. These errors were predominantly confined to the wrist joints, but also occurred at the hips. As such problems have an impact on model training, and we even found our model reproducing them in synthesis, we performed some data cleanup. We transformed the data to joint positions and detected discontinuities in the wrist speeds using a Hampel filter. This was followed by a manual check of the affected files. In the end, 17 files were removed from the training set.

5 SYSTEM OVERVIEW
A schematic view of the final system can be seen in Figure 2.
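The audio cleanup of Section 4.1 can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the 16 kHz sample rate matches the data2vec input described earlier, but the exact ramp shape used by the authors is not specified, so a 200 ms moving-average gate stands in for it.

```python
import numpy as np

def remove_dc(audio):
    """Subtract the mean of all non-zeroed samples (Section 4.1); the
    zeroed-out (anonymised) regions stay exactly zero."""
    voiced = audio != 0
    if voiced.any():
        audio = audio - voiced * audio[voiced].mean()
    return audio

def mute_crosstalk(audio, speech_spans, sr=16000, ramp_sec=0.2):
    """Zero everything outside the transcribed speech intervals, smoothing
    the gate with a 200 ms moving-average ramp to avoid transients."""
    gate = np.zeros(len(audio))
    for start, end in speech_spans:  # interval times in seconds
        gate[int(start * sr):int(end * sr)] = 1.0
    n = int(ramp_sec * sr)
    gate = np.convolve(gate, np.ones(n) / n, mode="same")
    return audio * gate
```

The smoothed gate ramps from 0 to 1 over 200 ms at each speech boundary, so muting introduces no step discontinuities into the waveform.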
The system was trained on an NVIDIA GeForce RTX 3090 for 387.4k steps and achieved 0.013 loss on the training set and 0.019 loss on the validation set. No post-processing was applied to the generated output motions.

6 EVALUATION
The evaluation of the generated motions was carried out by the GENEA Challenge organisers; details about the evaluation interface and experiment setups can be found in the evaluation paper [24]. The generated co-speech gestures were evaluated in three separate perceptual studies: human-likeness, appropriateness to the agent's speech, and appropriateness to the interlocutor's motion and speech. The evaluation included two baseline conditions and the natural motion taken from the motion-capture recordings. The monadic baseline ('BM') was generated with [6], which uses information from the main agent for gesture generation, while the dyadic baseline ('BD') is an adapted version of the former which also includes information from the interlocutor in the conversation. The study participants were recruited through a crowd-sourcing platform from English-speaking countries, and each study incorporated attention checks. Our system, labeled 'SG', achieved top performance in the studies of human-likeness and speech appropriateness based on the generated motions submitted. However, it ranked among the lowest in terms of interlocutor appropriateness.

6.0.1 Human-likeness evaluation. The aim of this study was to evaluate whether the generated motion of the virtual character looks like the motion of a real human. No audio was used, in order to disentangle the human-likeness evaluation from speech appropriateness. The evaluation was based on the HEMVIP methodology [19], where multiple different motion samples are presented in parallel and the participant is asked to rate each sample. Participants could give their ratings on a scale from 0 (worst) to 100 (best). Results for the evaluation are shown in Figure 3.
Our system, denoted 'SG', achieved the best performance among the entries, with a mean rating of 65.6±1.4. Figure 4 also shows that this result is significantly better than all of the entries except 'SF'. Interestingly, the human-likeness score is very close to the mean rating of the natural condition, which was 68.4±1.4, as seen in Table 1. This indicates that our system can generate co-speech gestures which resemble the motion of real humans.

6.0.2 Appropriateness to speech. The aim of this study was to evaluate whether the motion of the virtual character is appropriate for the given speech, controlling for the overall human-likeness of the motion. The participants were presented with a pair of matched and mismatched videos from the same condition in order to disentangle this study from the motion quality evaluation. Five response options were given for indicating preference between the two videos.

Figure 2: Architecture of the motion synthesis module.

Figure 3: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at 25 and 75 percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.

Figure 4: Significance of pairwise differences between conditions.
White means the condition listed on the y-axis scored significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate. Conditions use the same order as in Figure 3.

The responses were converted to integer values in the range [−2, 2]. Our system achieved a MAS score of 0.39±0.07 at the level of α = 0.05, and the matched motion was preferred over the mismatched in 61.8% of the evaluations. With these results it ranked highest among the generated motions. Figure 5 visualises the significant differences between conditions and shows that our system, denoted 'SG', was significantly more appropriate to speech than all of the other entries with generated motion. A comparison to other entries can be found in Table 1.

6.0.3 Appropriateness to interlocutor. The aim of this study was to evaluate whether the motion of the virtual character is appropriate for the given interlocutor behaviour (speech and motion). In order to evaluate the mismatched condition, synthetic interactions were created, where the main agent was the same but the interlocutor behaviour was replaced with one from another interaction. Our system achieved a MAS score of −0.09±0.08 at the level of α = 0.05, and the matched motion was preferred over the mismatched in 46.7% of the evaluations. With these results it ranked among the lowest. Figure 6 visualises the significant differences between conditions and shows that our system, denoted 'SG', was significantly less appropriate to the interlocutor than half of the entries, with no significant difference to the other half. A comparison to other entries can be found in Table 1.

The MP4-format video stimuli used in the user studies can be accessed through the following link: https://zenodo.org/record/8211449.
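The appropriateness statistics reported above can be reproduced from raw responses with a short sketch. The encoding of responses as integers in [−2, 2] (positive meaning the matched stimulus was preferred) and the equal splitting of ties follow the descriptions in this section; the exact aggregation pipeline used by the challenge organisers may differ.

```python
import numpy as np

def mas_and_pref_matched(responses):
    """responses: integers in [-2, 2], positive when the matched stimulus
    was preferred. Returns (mean appropriateness score, fraction preferring
    matched) with ties split equally between matched and mismatched."""
    r = np.asarray(responses, dtype=float)
    mas = r.mean()
    pref = ((r > 0).sum() + 0.5 * (r == 0).sum()) / len(r)
    return mas, pref
```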
As before, our system is denoted 'SG'.

Figure 5: Appropriateness for agent speech.

Figure 6: Appropriateness for the interlocutor.

Figure 7: Significant differences between conditions in the two appropriateness studies. White means the condition listed on the y-axis achieved a MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

Table 1: Summary of results for subjective evaluation studies with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, after splitting ties equally.

Human-likeness               | Speech appropriateness       | Interlocutor appropriateness
Condition  Median     Mean   | Condition  MAS        Pref.M. | Condition  MAS         Pref.M.
NA  71∈[70,71]  68.4±1.0     | NA  0.81±0.06  73.6%          | NA   0.63±0.08  67.9%
SG  69∈[67,70]  65.6±1.4     | SG  0.39±0.07  61.8%          | SA   0.09±0.06  53.5%
SF  65∈[64,67]  63.6±1.3     | SJ  0.27±0.06  58.4%          | BD   0.07±0.06  53.0%
SJ  51∈[50,53]  51.8±1.3     | BM  0.20±0.05  56.6%          | SB   0.07±0.08  51.8%
SL  51∈[50,51]  50.6±1.3     | SF  0.20±0.06  55.8%          | SL   0.07±0.06  53.4%
SE  50∈[49,51]  50.9±1.3     | SK  0.18±0.06  55.6%          | SE   0.05±0.07  51.8%
SH  46∈[44,49]  45.1±1.5     | SI  0.16±0.06  55.5%          | SF   0.04±0.06  50.9%
BD  46∈[43,47]  45.3±1.4     | SE  0.16±0.05  54.9%          | SI   0.04±0.08  50.9%
SD  45∈[43,47]  44.7±1.3     | BD  0.14±0.06  54.8%          | SD   0.02±0.07  52.2%
BM  43∈[42,45]  42.9±1.3     | SD  0.14±0.06  55.0%          | BM  -0.01±0.06  49.9%
SI  40∈[39,43]  41.4±1.4     | SB  0.13±0.06  55.0%          | SJ  -0.03±0.05  49.1%
SK  37∈[35,40]  40.2±1.5     | SA  0.11±0.06  53.6%          | SC  -0.03±0.05  49.1%
SA  30∈[29,31]  32.0±1.3     | SH  0.09±0.07  52.9%          | SK  -0.06±0.05  47.4%
SB  24∈[23,27]  27.4±1.3     | SL  0.05±0.05  51.7%          | SG  -0.09±0.08  46.7%
SC   9∈[9,9]    11.6±0.9     | SC -0.02±0.04  49.1%          | SH  -0.21±0.05  44.0%

7 DISCUSSION
The subjective evaluation results have shown that our system is capable of generating co-speech gestures that are human-like and speech-appropriate. The high performance on speech appropriateness shows that the current system is a promising approach to achieve semantically-aware co-speech gesture generation in virtual agents.

Our system was top-ranked in the human-likeness and appropriateness-for-agent-speech evaluations, while receiving one of the lowest scores in the appropriateness-to-interlocutor evaluation. This might seem counterintuitive, given that we did train the system to listen to the interlocutor. We believe that there are multiple factors at play here and will outline them below. First, our system was trained to take in speech information of the interlocutor as input (in the form of CSMP embeddings), but we chose not to include interlocutor motion as one of the inputs, due to time constraints.
Feeding interlocutor motion as input might have rendered a system capable of mirroring/mimicry, similar to [18], which could have resulted in a higher rating. Secondly, we would like to discuss another possible explanation, which stems from the nature of the data and how the evaluation was carried out. In the appropriateness evaluations, each system was compared against itself, and the objective was to see to what degree raters could distinguish motion that matched the context from mismatched motion. As mentioned in Section 4.1, there was a certain amount of cross-talk present in the data, i.e. the interlocutor audio was present in the main agent's channel and vice versa. We took extra measures to eliminate such cross-talk, because not doing so would have resulted in the agent performing co-speech gestures also while listening, based on the cross-talk from the interlocutor. Inspecting the evaluation stimuli based on the output from the different systems in the challenge, it is clear that this seems to happen in certain systems. We can further speculate that such an agent might in fact score favourably in the match/mismatch paradigm, because the gestures would indeed be interlocutor-aware. Future work on improving the interlocutor appropriateness could involve conditioning on interlocutor motion, as mentioned above, or training a separate model for listening behaviour.

Additional evaluations of the semantic gesture generation capabilities of the model could be of interest for future work. In theory, our model is capable of capturing the semantic relations between speech and gesture spaces through the CSMP module. However, the current subjective evaluation is limited in measuring the semantic gesture generation capabilities of the model, as these are difficult to disentangle from other aspects, such as speech-gesture synchrony.
Objective evaluation metrics for semantic appropriateness could be helpful in quantifying and improving our system in this regard.

8 CONCLUSIONS
In this paper we described our entry system to the GENEA Challenge 2023. We presented a system that builds on an existing diffusion-based motion synthesis model and proposed a conditioning signal that utilizes audio, text and motion data. For this we proposed a CLIP-like contrastive pre-training module, contrastive speech and motion pretraining (CSMP), in order to capture the underlying relations between speech and motion. Our system achieved top performance in human-likeness and speech appropriateness amongst the submitted entries, which shows that it is a promising approach to generating human-like co-speech gestures in agents. Our system ranked relatively low in interlocutor appropriateness, which is a focus for improvement in future work. Human-like, semantic and interlocutor-appropriate co-speech gesture generation in virtual agents is still an open problem. Our system's high performance in the subjective evaluations is encouraging and indicates that our submitted model is a promising way to achieve these goals.

ACKNOWLEDGMENTS
This work was partially supported by the Advanced Adaptive Intelligent Agents project (Digital Futures), the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by grant no. 20023495 (Development of behavior-oriented HRI AI technology for long-term interaction between service robots and users) funded by the Korean Ministry of Trade, Industry and Energy (MOTIE).
66hCQcJDvg
This paper introduces a diffusion-based co-speech gesture synthesis system, effectively using self-supervised multimodal features. I choose to recommend its acceptance to the ICMI workshop.
8: Top 50% of accepted papers, clear accept
This paper proposes a diffusion-based co-speech gesture synthesis system, which successfully explores the use of self-supervised pre-trained joint multimodal input features. Efficiently mining the multimodal relations between speech and gestures via a CLIP-like pre-training strategy is, I believe, the right direction, and the challenge results achieved by this paper strongly support this claim.

Main Pros:
* A diffusion-based model achieving high motion quality;
* Employment of contrastive speech and motion pre-training, enhancing the semantic awareness of the system.

Typos & Questions:
* Line 10-11: Our "solutions" -> "solution";
* Offering some visualization results is encouraged.

To sum up, I think this paper proposes an efficient framework to address the weak-semantics problem in deep learning-based gesture generation. I would like to recommend that this work be accepted to the ICMI workshop.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
S9Efb3MoiZ
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Gesture Generation with Diffusion Models Aided by Speech Activity Information
["Rodolfo Luis Tonoli", "Leonardo Boulitreau de Menezes Martins Marques", "Lucas Hideki Ueda", "Paula Dornhofer Paro Costa"]
This paper describes a gesture generation model based on state-of-the-art diffusion models. Novel adaptations were introduced to improve motion appropriateness relative to speech and human-likeness. Specifically, the main focus was to enhance gesture responsiveness to speech audio. In particular, we explored using a pre-trained Voice Activity Detector (VAD) to obtain more meaningful audio representations. The proposed model was submitted to the GENEA Challenge 2023. Perceptual experiments compared our model, labeled SH, with other submissions to the challenge. The results indicated that our model achieved competitive levels of human-likeness. While appropriateness to the agent's speech score was lower than most entries, there were no statistically significant differences from most models at the confidence level.
["Gesture generation", "co-speech gestures", "diffusion models"]
ABSTRACT
This paper describes a gesture generation model based on state-of-the-art diffusion models. Novel adaptations were introduced to improve motion appropriateness relative to speech and human-likeness. Specifically, the main focus was to enhance gesture responsiveness to speech audio. We explored using a pre-trained Voice Activity Detector (VAD) to obtain more meaningful audio representations. The proposed model was submitted to the GENEA Challenge 2023. Perceptual experiments compared our model, labeled SH, with other submissions to the challenge. The results indicated that our model achieved competitive levels of human-likeness. While appropriateness to the agent's speech score was lower than most entries, there were no statistically significant differences from most models at the confidence level.

CCS CONCEPTS
• Computing methodologies → Animation; Intelligent agents; Machine learning.

KEYWORDS
Gesture generation, co-speech gestures, diffusion models

ACM Reference Format:
Rodolfo L. Tonoli, Leonardo B. de M. M. Marques, Lucas H. Ueda, and Paula D. P. Costa. 2023. Gesture Generation with Diffusion Models Aided by Speech Activity Information. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23 Companion), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3610661.3616554

1 INTRODUCTION
Human communication is composed of verbal and nonverbal behaviours. Co-speech gestures are one of these behaviours. They are

∗Both authors contributed equally to this research.
†Also with Artificial Intelligence Lab., Recod.ai, Institute of Computing, University of Campinas, SP, Brazil.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ICMI '23 Companion, October 9–13, 2023, Paris, France
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0321-8/23/10. . . $15.00
https://doi.org/10.1145/3610661.3616554

visible actions of any body part produced while speaking and may serve different purposes, such as to provide emphasis or to depict some physical property [30]. Being such a key part of human communication, gestures are employed in embodied agents to simulate real interactions and create believable characters [29]. Otherwise, these agents may be perceived as lifeless or dull.

Recent research focused on automatic gesture generation (or synthesis) through deep learning. Such systems are able to animate embodied agents much faster and in a less time-demanding way than traditional techniques such as hand-crafted animations or motion capture. Additionally, these traditional techniques may not be suited for applications whose speech content is unknown beforehand, such as an avatar being controlled by a human or an embodied agent powered by a language model.

Most research on gesture generation takes a cross-modal mapping approach to this problem, similar to a translation between different behaviour modalities [4]. Also, gestures are correlated with prosody and may be associated with semantics [21]. Thus, most systems use speech audio, speech text, or both to guide gesture generation [23]. However, synthetic data still struggles to appear human-like and appropriate to speech if compared to real human data [33]. More challenging scenarios could widen the gap between synthetic and real data.
For example, in dyadic interactions, people are expected to take turns being the active speaker for brief or long moments. Most research has not addressed such situations. We propose a monadic gesture generation model that considers voice activity for better alignment and responsiveness of gestures given speech audio. The model is based on a composition of DiffuseStyleGesture [32], a speech-driven diffusion model, and the Motion Diffusion Model (MDM) [26], which is text-driven. The main contributions of this paper relative to the aforementioned models are:

• the integration of voice activity information to improve turn-taking and speech audio synchrony while using only monadic inputs;
• the employment of aligned speech text as input through a pre-trained CLIP model, thus supporting the generation of gestures semantically related to speech;
• the use of speech audio representations suited for content-related tasks from a pre-trained WavLM model.

Our code can be accessed via https://github.com/AI-Unicamp/ggvad-genea2023.

This article is structured as follows: Section 2 presents related works on gesture generation and diffusion; the data processing is detailed in Section 3; Section 4 describes the proposed model, and qualitative evaluations of our model are presented in Section 5; the results of the proposed model compared to other entries to the GENEA Challenge 2023 are detailed in Section 6; and Section 7 presents the conclusion and final remarks.

2 BACKGROUND AND PRIOR WORK
Generative models enable the capture of the one-to-many nature of gestures. Studies using VAEs [9], GANs [8], and Normalizing Flows [10] show that such models surpass deterministic ones. However, these approaches still suffer from generalized problems such as mean pose convergence and training instability.
Recently, diffusion models arose as a new class of promising generative models, achieving state-of-the-art results across a wide range of multimodal tasks validated by perceptual evaluations, without the same pitfalls as the generative models mentioned before. Additionally, these models were shown to be capable of handling data with special structures, efficient sampling, and providing improved likelihood estimation [31].

Denoising Diffusion Probabilistic Models (DDPMs) [12] are a type of generative model that synthesize new samples from an underlying data distribution by learning how to reconstruct information. During the training process, the model takes one noisy data point (x_t), obtained by applying t Gaussian noise addition steps to the original data (x), with 0 < t ≤ T, where T is the size of the complete diffusion noise-adding chain, and is set to equivalently predict either a one-step denoised sample (x_{t−1}), a fully reconstructed data point (x_0), or the noise contained (ε). On inference, the process is started from a pure Gaussian noise distribution and the reconstruction is performed iteratively T times, generating a new sample [12].

Diffusion models have exhibited state-of-the-art performance in several different tasks. On image synthesis, diffusion models achieved superior performance to the then GAN-based state-of-the-art synthesis [7], and were also proven able to generate and edit hyper-realistic images [22, 25]. In the audio domain, diffusion models have been successfully exploited for audio generation [15] and text-to-audio [19] tasks, obtaining higher performance when compared to other current staple models.
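The DDPM training and sampling process described above can be sketched numerically. This is a generic illustration, not the models discussed in this paper: the β schedule, chain length, and the oracle denoiser are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                   # length of the noise-adding chain (illustrative)
betas = np.linspace(1e-4, 0.02, T)       # variance schedule (illustrative)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Forward process: jump straight to step t of the noising chain."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def p_sample_loop(predict_eps, shape):
    """Reverse process: start from pure Gaussian noise and denoise T times."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = predict_eps(x, t)  # the trained model predicts the noise
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                    # no extra noise on the final step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

# Toy check with an oracle that knows the true clean sample x0:
x0_true = np.full(4, 0.5)
oracle = lambda x, t: (x - np.sqrt(alpha_bar[t]) * x0_true) / np.sqrt(1.0 - alpha_bar[t])
recovered = p_sample_loop(oracle, (4,))
```

With a perfect noise predictor, the last reverse step recovers x_0 exactly, which is why the three prediction targets (x_{t−1}, x_0, ε) are interchangeable in principle.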
Recently, diffusion models have also been explored for the task of video generation, and were demonstrated to synthesize high-fidelity videos with a high degree of controllability and world knowledge [11].

In the context of human motion generation, text-based models aim to semantically control the movements via natural language. The MotionDiffuse model [35] is the first model to exploit DDPMs for this task, combining these models with a cross-modal Transformer-based architecture. In another approach, denominated Motion Diffusion Model (MDM) [26], textual representations extracted from a pre-trained CLIP [24] are combined with a Transformer model in a classifier-free guidance diffusion training process [13]. Other works tackle the dance generation task, which intends to generate dances given music as audio input. The EDGE [27] method pairs a diffusion model with Jukebox, a generative model for music, whereas the Listen, Denoise and Action! [1] model adapts DiffWave [15] to generate poses and synthesize dances in various styles.

More recently, diffusion models have also been applied to the gesture generation task. DiffMotion [34] is the first approach that applies DDPMs to generate gestures. It leverages an autoregressive temporal encoder based on an LSTM that processes context represented by spectral audio features and previous poses to condition a diffusion process, generating each pose individually.

The DiffGesture [37] model uses a convolutional audio encoder to extract representations directly from the raw audio. A Transformer model then uses these representations and undergoes an implicit classifier-free guidance diffusion training.

The GestureDiffuCLIP [2] model introduces multimodal (text, motion, or video) prompt-conditioned, style-controlled gesture generation via mode-specific pre-trained CLIP encoders.
Also, they use a contrastive learning strategy to learn semantic correspondences between textual transcripts of the input speech and gestures, allowing for the generation of semantically aware gestures. These contributions, along with a denoiser network based on Transformers, attention, and AdaIN layers [14] to incorporate style guidance, compose a latent diffusion training process [25].

Finally, the DiffuseStyleGesture [32] model combines layers of cross-local and global attention to better capture the localized aspects of gestures. With representations extracted from the self-supervised WavLM model [6], the authors perform a diffusion training process and are able to generate and control gestures based on a style label.

Despite the increasing interest in the field, the synthesized motions from most models are still far from indistinguishable from real human motion [33]. Moreover, research often concentrates on monadic scenarios in which only one participant actively communicates. Consequently, crucial behaviours of real-life interactions, such as listening, reciprocal expression, and interruptions, are disregarded during development and evaluation.

3 DATA AND DATA PROCESSING
The dataset used by the 2023 GENEA Challenge is an adaptation of the Talking With Hands 16.2M (TWH) data [18]. Pre-processing, data augmentation, and selection are described in the challenge's main paper [17]. The available dataset presents a dyadic scenario, i.e., it is composed of data from two people having a conversation, referred to as the main agent and interlocutor. Entries to the challenge should only generate movements for the main agent, and using the interlocutor's data was optional. Available data includes motion, speech audio, speech text (audio transcripts with timestamps), and speaker label. We only used data from the main agent; thus, our model depends on monadic information alone despite the dyadic scenario.
Speaker labels were also ignored.

The dataset motions are BVH files with movements composed of 30 poses per second represented by Euler angles. We extracted each pose and composed a feature vector g = [ρ_p, ρ̇_p, ρ_r, ρ̇_r], where ρ_p ∈ R^{3j} and ρ̇_p ∈ R^{3j} are the global 3D joint positions and positional velocities, ρ_r ∈ R^{6j} and ρ̇_r ∈ R^{3j} are the local 6D joint rotations [36] and the local 3D joint rotational velocities, and j is the number of joints. The 30 frames per second rate of the original data and all 83 joints of the skeleton were preserved, thus g ∈ R^{1245} for each pose. Each dimension of motion data is normalized to zero mean and unit standard deviation over the challenge training set. Audio files were resampled from 44.1 kHz to 16 kHz.

4 METHOD
Our approach consists of a combination of the MDM [26] and DiffuseStyleGesture [32] models, with modifications aiming for improved responsiveness of gestures given speech audio. The architecture is shown in Figure 1. Our model generates sequences of 120 poses simultaneously, corresponding to 4 seconds. We consider inputs to be divided into global and fine-grained information. The first corresponds to information relevant to the 4-second sequence as a whole, which includes the words spoken (text), seed poses, and timestep embedding. On the other hand, fine-grained information is considered to be relevant at the frame level; thus, it includes audio and speech activity.

4.1 Global Information
Since gestures can be semantically related to speech, providing text information could improve gesture appropriateness. As textual features, we use the spoken words within a motion sequence. Word timestamps from the audio transcript are used for extracting the corresponding words.
As in the MDM [26] model, the speech text contained in the sequence of poses passes through a pre-trained CLIP [24] model¹ and is then processed from the CLIP output dimension of 512 to a dimension of 64 by a fully connected layer.

For the motion between consecutive generated sequences to have cohesion, 10 previous seed poses are used as conditional input. These poses are flattened, projected to a dimension of 192, and then concatenated with the textual information, forming a vector with the defined latent dimension of 256. Additionally, the timestep embedding of the diffusion process, which indicates which denoising step is being performed, is a sinusoidal positional embedding that is passed through two fully connected layers with a Sigmoid Linear Unit (SiLU) activation layer in between and projected to the latent dimension. With this, the embedding that represents the global conditioning information (the one that is invariant to the pose sequence) is obtained by summing the timestep embedding with the concatenation of the textual and seed pose embeddings.

4.2 Fine-grained Information
We work with chunks of sequences of 120 poses corresponding to 4 seconds of motion. The noisy poses for the diffusion process are obtained by adding t steps of Gaussian noise to a sequence. These poses are then projected via a linear layer from the pose dimension of 1245 to the latent space dimension. For the audio information, we use the resampled audio data and pass it through the WavLM [6] model².
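The shapes in the global-information pipeline above can be checked with a small sketch. All weight matrices here are random NumPy stand-ins for learned parameters, and the helper names are assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT = 256                 # latent dimension stated in the paper
POSE_DIM = 15 * 83           # 1245: (3 pos + 3 pos-vel + 6 rot + 3 rot-vel) x 83 joints

def silu(x):
    return x / (1.0 + np.exp(-x))

def timestep_embedding(t, dim=LATENT):
    """Sinusoidal positional embedding of the diffusion timestep."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

# Random stand-ins for the learned projection weights (assumptions, not trained values).
W_text = rng.standard_normal((512, 64)) * 0.02             # CLIP output 512 -> 64
W_seed = rng.standard_normal((10 * POSE_DIM, 192)) * 0.02  # 10 flattened seed poses -> 192
W_t1 = rng.standard_normal((LATENT, LATENT)) * 0.02
W_t2 = rng.standard_normal((LATENT, LATENT)) * 0.02

def global_conditioning(clip_feat, seed_poses, t):
    text = clip_feat @ W_text                          # (64,)
    seed = seed_poses.reshape(-1) @ W_seed             # (192,)
    cond = np.concatenate([text, seed])                # (256,) = latent dimension
    t_emb = silu(timestep_embedding(t) @ W_t1) @ W_t2  # two FC layers, SiLU in between
    return t_emb + cond                                # summed global embedding

g = global_conditioning(rng.standard_normal(512), rng.standard_normal((10, POSE_DIM)), t=37)
```

Note how the 64 + 192 concatenation lands exactly on the 256-dimensional latent space, so it can be summed with the projected timestep embedding.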
Differently from DiffuseStyleGesture [32], we use the representations extracted from the 11th layer instead of the 12th. The 11th layer is reported to perform better at content-related tasks, such as phoneme recognition and automatic speech recognition. These representations are first interpolated to match the length of the corresponding pose sequence and then projected to a dimension of 64 by a linear layer.

¹Version 'ViT-B/32' obtained from https://github.com/openai/CLIP
²Version 'Base+' obtained from https://github.com/microsoft/unilm/tree/master/wavlm

Figure 1: Model architecture.

4.2.1 Speech Activity Information. Due to the dyadic nature of the dataset, some sections of the data are composed of moments in which the main agent is not the active speaker, such as listening and turn-taking moments. Gestures performed in active or non-active moments may play different roles in human interaction and, thus, differ from those performed in other moments. For example, beat gestures occur during articulations of speech and may serve to emphasize what is being said [21]; differently, mimicry, often performed automatically, may enhance helpfulness and strengthen social bonds [28]. Although our model only uses monadic data, we introduce the use of speech activity information. This information, otherwise embedded in audio representations such as spectrograms and MFCCs, may be lost in the abstract WavLM representations. Furthermore, the interpolation of representations to match the pose sequence can blend moments with and without speech activity. Thus, the contribution of this inclusion is believed to be two-fold. First, it provides more straightforward access to fine-grained speech energy. Second, it helps to stress, during training, the difference between gestures in the aforementioned moments, not in terms of functionality, but dynamics.

Speech activity can be inferred through analytical approaches such as energy and F0.
However, the dataset audios contain noise that could affect computing these parameters: various speakers, different speech volumes, and background noise such as speech from the interlocutor and breathing. Thus, we consider two scenarios for acquiring speech activity information. The first is based on a pre-trained Voice Activity Detector (VAD)³ that consists of a small CRDNN (a combination of convolutional, recurrent, and deep neural networks) trained on the LibriParty dataset⁴, which is a synthetic cocktail-party scenario derived from the Librispeech dataset. When speech is detected, the model outputs a 1, and otherwise a 0. The second approach is taken from the annotated speech text timestamps provided in the dataset. When there is any text, we consider the respective timestamps as 1 and otherwise as 0.

³Obtained from https://huggingface.co/speechbrain/vad-crdnn-libriparty

The major difference between these approaches is that the pre-trained model can detect intra-text pauses, whereas audio transcripts provide word-level timestamp granularity. A comparison of both is shown in Figure 2. From the figure, it is noticeable that the VAD provides closer alignment with speech energy. Besides, the pre-trained VAD removes the need for audio-aligned annotated speech text, which is sensitive to human perception or error.

Figure 2: Scaled speech activities from timed audio transcripts (red) and from the VAD (black) overlapped with a spectrogram of an eight-second audio sample in the background.

The speech information sequence extracted from the VAD is used to select two embeddings with latent dimensions representing the presence of speech or no speech for each pose. This sequence of embeddings is then concatenated with the noisy poses and the audio embeddings, forming the fine-grained information.

4.3 Training
The fine-grained information is concatenated with the global information along the latent dimension.
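The transcript-based activity signal described above can be sketched as a frame-level 0/1 sequence. This is a minimal sketch: the 30 fps frame rate follows Section 3, while the function name and the word spans are made-up illustrations.

```python
def activity_from_transcript(word_spans, n_frames, fps=30):
    """0/1 speech activity per pose frame from word (start, end) times in seconds.

    Sketch of the transcript-based variant; the paper's preferred variant
    queries a pretrained VAD instead, which also catches intra-text pauses.
    """
    act = [0] * n_frames
    for start, end in word_spans:
        first = max(0, int(start * fps))
        last = min(n_frames - 1, int(end * fps))
        for f in range(first, last + 1):
            act[f] = 1
    return act

# Two words: 0.0-0.5 s and 1.0-1.2 s, over a 2 s (60-frame) window.
act = activity_from_transcript([(0.0, 0.5), (1.0, 1.2)], n_frames=60)
```

In the model, each 0/1 value then selects one of two learned latent embeddings that are concatenated with the noisy poses and audio features.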
Then, all the input information is projected back to the latent dimension by an input linear layer and fed to the cross-local attention layer to capture local relations between the features. Then, we concatenate the global information embedding one more time with the output along the sequence dimension before passing the sequence to the Transformer encoder to capture the global context. Then, we ignore the first token of the output sequence and project the outputs to the pose dimension, which finally represents the denoised pose (x_0) itself. We use positional embeddings to add sequence information in both the cross-local attention and the Transformer encoder.

On inference, one sequence at a time is generated. The model outputs a vector G = [g_1, g_2, ..., g_120]. The last 10 poses from the previously generated sequence are used to condition the generation of the next sequence; mean poses are used for conditioning the first sequence.

⁴https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriParty/generate_dataset

For post-processing, we use linear interpolation to impose continuity between successive sequences. To smooth motion artifacts in the output, we also apply a Savitzky-Golay [20] filter with a window length of 9 and polynomial order of 3.

The model was trained for 290k steps with a batch size of 64 on a single NVIDIA Titan Xp GPU, which took about 2.5 days.

5 EVALUATION
There is still no objective metric to measure gesture perception reliably. Moreover, previous research has found that objective metrics differ from subjective ones [16]. Therefore, the research team empirically evaluated the proposed model, its variations, and the reference models through visual inspection of their outputs.

We trained the MDM [26] and DiffuseStyleGesture [32] models and used them as references for comparison, i.e., a starting point for development. Although they provide reasonable human-like motion, in terms of appropriateness to speech we found the results unsatisfactory.
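As an aside, the Savitzky-Golay post-processing from Section 4.3 can be sketched as follows, assuming SciPy is available; the sine-plus-jitter signal is a toy stand-in for one pose channel, not the paper's data.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 120)                           # 4 s of motion at 30 fps
noisy = np.sin(t) + 0.05 * rng.standard_normal(t.size)   # smooth trajectory plus jitter

# Same settings as in the paper: window length 9, polynomial order 3.
smooth = savgol_filter(noisy, window_length=9, polyorder=3)
```

The filter fits a cubic polynomial in each 9-frame window, attenuating frame-to-frame jitter while leaving the slowly varying trajectory largely intact.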
The outputs seemed unaware of moments such as brief pauses, turn-taking, and listening moments. That is, the agent would frequently make gestures in those moments that appeared inadequate and similar to behaviours performed when it was the active speaker. So, our main focus in developing the model for the GENEA Challenge 2023 was to overcome those issues of disregard for non-speaking moments. Thus, a VAD was employed to leverage speech activity information.

Figure 3: Histograms of the rotational velocities from the main agent's left and right forearm joints from the training set of the dataset (top), and the output of the proposed model (bottom). Red and black indicate velocities extracted when the main agent was the active speaker and when it was not.

In order to examine the effectiveness of the VAD, Figure 3 presents histograms of the rotational velocities of the forearms, joints that are very active when gesturing with the arms, for the real training set (top) and the output of the proposed model (bottom). The figure splits each set into two distributions: when the VAD indicates that there is an occurrence of speech (VAD output equals one), and when the VAD indicates that there is no speech (its output is zero).

For the training set, the histograms reveal distinct patterns associated with speech activity during gesticulation. Speakers in the dataset exhibit increased forearm movements while talking versus silence periods. These insights support our underlying assumption that people tend to perform more gestures, or at least more abrupt gestures, when they are speaking.

The proposed model could reproduce, to some extent, the overall behaviour of the training set.
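The speaking-versus-silent comparison underlying Figure 3 can be mimicked numerically. This sketch uses made-up toy data and a hypothetical helper, not the challenge dataset:

```python
import numpy as np

def mean_speed_by_activity(rot_vel, activity):
    """Mean rotational speed on speaking vs. non-speaking frames.

    rot_vel: (frames, dims) rotational velocities for a joint;
    activity: (frames,) 0/1 speech activity.
    A simple numerical proxy for the histogram split shown in Figure 3.
    """
    speed = np.linalg.norm(rot_vel, axis=1)
    act = activity.astype(bool)
    return speed[act].mean(), speed[~act].mean()

# Toy data mimicking the dataset trend: larger movements while speaking.
rng = np.random.default_rng(0)
activity = (rng.random(300) < 0.5).astype(int)
scale = np.where(activity[:, None] == 1, 2.0, 0.5)
rot_vel = rng.standard_normal((300, 6)) * scale

speaking, silent = mean_speed_by_activity(rot_vel, activity)
```

On the toy data the speaking-frame mean comes out clearly higher, which is the qualitative pattern the training-set histograms show and that a well-conditioned model should reproduce.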
However, it was unable to synthesize motion that reproduced the differences seen in the training set given speech activity, that is, a larger concentration of higher velocities when the agent is speaking. We did an ablation study with the proposed model without the VAD module. Its histogram was similar to the one with VAD. However, visual inspections of the outputs by the research team favored the outputs of the proposed model with VAD in terms of speech and gesture alignment.

We also compared outputs from models with and without text input. However, we did not find a significant amount of semantically related gestures in their output. Further investigation should be carried out to indicate whether there is a sufficient amount of such gestures in the dataset for models to be able to learn from. Still, we kept text as input, as motion quality was not impaired.

Compared to our reference models, the output of the proposed model seems better, especially in terms of speech audio and gesture alignment. However, we notice that some artifacts are still present in the motions. Motions occasionally converge to an unusual or odd-looking pose, absurd rotations still take place, and jittering is sometimes noticeable.

6 RESULTS AND DISCUSSION
The results of the shared evaluations of the GENEA Challenge 2023 indicated that our model (condition SH) is competitive with most conditions in terms of human-likeness but obtained relatively poor results for appropriateness to speech [17].

Figure 4 presents human-likeness ratings. Participants gave their ratings based on how human-like the motions appeared, from 0 (worst) to 100 (best). Real motion data (NA) achieved a median rating of 71, the baselines 46 (BD) and 43 (BM), while our condition scored 46. We believe that the module that contributed the most to the human-likeness of generated gestures is the attention mechanism. As Yang et al.
[32] showed in their ablation studies, the cross-local attention module played a significant role in terms of human-likeness ratings.

Two evaluations were performed to assess gesture appropriateness to speech: appropriateness for the agent's speech and for the interlocutor's speech. The first contains mainly moments where the main agent is the active speaker, while the roles are reversed in the latter.

Figure 4: Shared human-likeness rating study. Red bars are median ratings; yellow diamonds are mean ratings. Entries to the challenge are labeled SA-SL (ours is SH), BD and BM are the baselines [5], and NA is real data. Extracted from Kucherenko et al. [17].

Figure 5: Shared appropriateness for agent speech responses (proportion of annotator preferences, from clear preference for matched to clear preference for mismatched stimuli). Entries to the challenge are labeled SA-SL (ours is SH), BD and BM are the baselines [5], and NA is real data. Extracted from Kucherenko et al. [17].

For the appropriateness for agent speech evaluation, subjects were presented with speech audio and two motions generated by the model. One motion is the output generated with the speech audio presented as input, and the other is the output from another segment of speech audio. For our condition, subjects preferred the matching motion 52.9% of the time, slightly above chance. Although ours was one of the lowest mean appropriateness scores, there are no statistically significant differences between our score and those of another ten conditions (conditions BM to SA in Figure 5).

Our condition had the lowest score in the appropriateness for the interlocutor evaluation. This means that subjects found the mismatched stimuli more appropriate. However, our model does not use any interlocutor information as input.
Thus, from the model’sICMI ’23 Companion, October 9–13, 2023, Paris, France Tonoli, et al.NA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 6: Shared appropriateness for the interlocutor re-sponses. Entries to the challenge are labeled SA-SL (ours isSH), BD and BM are the baselines, and NA is real data. Ex-tracted from Kucherenko et al. [17].perspective, the output of both matched and mismatched stimulifor this evaluation was generated using the same inputs. Evidently,it is not expected that the outputs be exactly the same due to theprobabilistic nature of the model. But both outputs are expectedto be equivalent in terms of human-likeness and appropriateness,thus scoring similarly to chance (50%).We also noticed from all three evaluations that our model had awide range of scores. For instance, whiskers from the box plot visu-alization of Figure 4 span almost the entire y-axis; our condition,along with condition SK, had the highest confidence intervals ofmedian and mean ratings. In the appropriateness for agent speech,our condition had the third highest number of clear preferences formatched stimuli, the highest for mismatched, and the second lowestfor no preferences when compared to other entries to the challenge.Thus, we argue that the proposed model is indeed capable of gen-erating gestures that are competitive in terms of human-likenessand appropriateness for the main agent. However, the artifactsmentioned in the previous section hinder gesture perception andshould be addressed before any conclusion regarding the proposedarchitecture and individual modules.7 CONCLUSIONThis paper describes the proposed diffusion-based model for ges-ture generation that uses pre-trained VAD. Incorporating speechactivity information in such models could improve responsivenessduring rapid back-and-forth interactions. 
Also, a VAD can explicitly provide this information without needing human-annotated transcripts, and is thus potentially suited for real-time dialogue. Our model has been compared with others in the GENEA Challenge 2023, a crowd-sourced evaluation that directly compares different methods while controlling factors such as data and evaluation methodology. The evaluation showed that our model is competitive with other entries to the challenge in terms of human-likeness, but appropriateness to speech is still unimpressive despite our efforts.

Our experiments revealed mixed results regarding the effectiveness of the proposed improvements to the gesture generation system. While convergence to undesired poses, extreme joint rotations, and jittering were not frequent, they nonetheless occurred. Besides, output motion was unstable, i.e., when generating motions given the same inputs, the resulting motion quality varied greatly. These issues may have contributed to subpar performance in the evaluations and compromised the responsiveness of generated gestures to speaking moments. Although our adaptations hold potential value for gesture generation tasks, further improvements are needed to fully leverage their benefits, especially the explicit use of speech activity information, which could be leveraged to address turn-taking moments.

We intend to focus primarily on improving speech and gesture alignment in future work. An interesting approach is adapting an external framework for alignment such as the one proposed by Badlani et al. [3]. Another obvious path is to incorporate data from the interlocutor to capture the aspects of dyadic scenarios.

ACKNOWLEDGMENTS
This study was partially funded by the Coordenação de Aperfeiçoamento de Pessoal de Nivel Superior – Brasil (CAPES) – Finance Code 001. The first author is grateful to the Eldorado Research Institute.
VjYgCbk76N
The main advantage of this work is the innovative use of diffusion models and relevant auxiliary information to improve the correlation and semantic coherence between speech and gestures, leading to more natural and appropriate gesture generation. Although the results are fair, the analysis is detailed. I recommend accepting it after revision.
6: Marginally above acceptance threshold
Abstract: The article presents a gesture generation method based on diffusion models. The proposed approach utilizes a pre-trained Voice Activity Detector (VAD), meaningful audio representations, and textual information to generate responsive and natural-looking gestures. The method was evaluated in the GENEA Challenge 2023 and achieved fair results concerning human-likeness. Overall, the proposed method showcases innovation in the field of gesture generation and has the potential to enhance the naturalness and appropriateness of generated gestures.

Review Feedback:
(1) Please explain in detail the meaning of Figure 2 in the text and explain why only the VAD was ultimately used.
(2) Although the network uses a speech activity indicator to identify speech interruptions, the gestures may not necessarily be stationary during those periods of silence, potentially confusing the network. Is it possible to consider adjusting the gestures at VAD=0 during data preprocessing?
(3) From Figure 3, it can be observed that the occurrences at offsets like 80 and 100 for VAD=0 did not decrease significantly compared to VAD=1. I would like to know whether this matches the authors' original expectation, such as the absence of movement during pauses.
(4) Compared to the original diffusion model, does the inclusion of this additional information result in objective and subjective improvements? This is not explicitly demonstrated in the article.
(5) The authors use a training approach where sequences of 120 poses are trained at once. How was this segmentation chosen? Does this segmentation (grouping every 120 frames) disrupt the temporal coherence of the original data?

I hope the above feedback can assist you in improving your work.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
S9Efb3MoiZ
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Gesture Generation with Diffusion Models Aided by Speech Activity Information
["Rodolfo Luis Tonoli", "Leonardo Boulitreau de Menezes Martins Marques", "Lucas Hideki Ueda", "Paula Dornhofer Paro Costa"]
This paper describes a gesture generation model based on state-of-the-art diffusion models. Novel adaptations were introduced to improve motion appropriateness relative to speech and human-likeness. Specifically, the main focus was to enhance gesture responsiveness to speech audio. In particular, we explored using a pre-trained Voice Activity Detector (VAD) to obtain more meaningful audio representations. The proposed model was submitted to the GENEA Challenge 2023. Perceptual experiments compared our model, labeled SH, with other submissions to the challenge. The results indicated that our model achieved competitive levels of human-likeness. While its appropriateness to the agent's speech score was lower than most entries, there were no statistically significant differences from most models at the confidence level.
["Gesture generation", "co-speech gestures", "diffusion models"]
ABSTRACT
This paper describes a gesture generation model based on state-of-the-art diffusion models. Novel adaptations were introduced to improve motion appropriateness relative to speech and human-likeness. Specifically, the main focus was to enhance gesture responsiveness to speech audio. We explored using a pre-trained Voice Activity Detector (VAD) to obtain more meaningful audio representations. The proposed model was submitted to the GENEA Challenge 2023. Perceptual experiments compared our model, labeled SH, with other submissions to the challenge. The results indicated that our model achieved competitive levels of human-likeness. While the appropriateness to the agent's speech score was lower than most entries, there were no statistically significant differences from most models at the confidence level.
CCS CONCEPTS: • Computing methodologies → Animation; Intelligent agents; Machine learning.
KEYWORDS: Gesture generation, co-speech gestures, diffusion models
ACM Reference Format: Rodolfo L. Tonoli, Leonardo B. de M. M. Marques, Lucas H. Ueda, and Paula D. P. Costa. 2023. Gesture Generation with Diffusion Models Aided by Speech Activity Information. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23 Companion), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3610661.3616554
1 INTRODUCTION
Human communication is composed of verbal and nonverbal behaviours. Co-speech gestures are one of these behaviours. They are
∗ Both authors contributed equally to this research. † Also with Artificial Intelligence Lab., Recod.ai, Institute of Computing, University of Campinas, SP, Brazil.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. ICMI '23 Companion, October 9–13, 2023, Paris, France. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0321-8/23/10. . . $15.00. https://doi.org/10.1145/3610661.3616554
visible actions of any body part produced while speaking and may serve different purposes, such as to provide emphasis or to depict some physical property [30]. Being such a key part of human communication, gestures are employed in embodied agents to simulate real interactions and create believable characters [29]. Otherwise, these agents may be perceived as lifeless or dull.
Recent research has focused on automatic gesture generation (or synthesis) through deep learning. Such systems are able to animate embodied agents much faster and in a less time-demanding way than traditional techniques such as hand-crafted animations or motion capture. Additionally, these traditional techniques may not be suited for applications whose speech content is unknown beforehand, such as an avatar being controlled by a human or an embodied agent powered by a language model.
Most research on gesture generation takes a cross-modal mapping approach to this problem, similar to a translation between different behaviour modalities [4]. Also, gestures are correlated with prosody and may be associated with semantics [21]. Thus, most systems use speech audio, speech text, or both to guide gesture generation [23]. However, synthetic data still struggles to appear human-like and appropriate to speech when compared to real human data [33]. More challenging scenarios could widen the gap between synthetic and real data.
For example, in dyadic interactions, people are expected to take turns being the active speaker for brief or long moments. Most research has not addressed such situations. We propose a monadic gesture generation model that considers voice activity for better alignment and responsiveness of gestures given speech audio. The model is based on a composition of DiffuseStyleGesture [32], a speech-driven diffusion model, and the Motion Diffusion Model (MDM) [26], which is text-driven. The main contributions of this paper relative to the aforementioned models are:
• the integration of voice activity information to improve turn-taking and speech audio synchrony while using only monadic inputs;
• the employment of aligned speech text as input through a pre-trained CLIP model, thus supporting the generation of gestures semantically related to speech;
• the use of speech audio representations suited for content-related tasks from a pre-trained WavLM model.
Our code can be accessed via https://github.com/AI-Unicamp/ggvad-genea2023.
This article is structured as follows: Section 2 presents related work on gesture generation and diffusion; the data processing is detailed in Section 3; Section 4 describes the proposed model, and qualitative evaluations of our model are presented in Section 5; the results of the proposed model compared to other entries to the GENEA Challenge 2023 are detailed in Section 6; and Section 7 presents the conclusion and final remarks.
2 BACKGROUND AND PRIOR WORK
Generative models enable the capture of the one-to-many nature of gestures. Studies using VAEs [9], GANs [8], and Normalizing Flows [10] show that such models surpass deterministic ones. However, these approaches still suffer from generalized problems such as mean pose convergence and training instability.
Recently, diffusion models arose as a new class of promising generative models, achieving state-of-the-art results across a wide range of multimodal tasks validated by perceptual evaluations, without the same pitfalls as the generative models mentioned before. Additionally, these models were shown to be capable of handling data with special structures, performing efficient sampling, and providing improved likelihood estimation [31].
Denoising Diffusion Probabilistic Models (DDPMs) [12] are a type of generative model that synthesizes new samples from an underlying data distribution by learning how to reconstruct information. During the training process, the model takes one noisy data point x_t, obtained by applying t Gaussian noise-addition steps to the original data x, with 0 < t ≤ T, where T is the length of the complete diffusion noise-adding chain, and is set to equivalently predict either a one-step denoised sample x_{t−1}, a fully reconstructed data point x_0, or the contained noise ε. At inference, the process is started from a pure Gaussian noise distribution and the reconstruction is performed iteratively T times, generating a new sample [12].
Diffusion models have exhibited state-of-the-art performance in several different tasks. On image synthesis, diffusion models achieved superior performance to the then GAN-based state of the art [7], and were also proven able to generate and edit hyper-realistic images [22, 25]. In the audio domain, diffusion models have been successfully exploited for audio generation [15] and text-to-audio [19] tasks, obtaining higher performance when compared to other current staple models.
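The closed-form forward noising and the ε-prediction target described above can be sketched in a few lines. This is a minimal NumPy illustration; the linear β schedule, T, and the shapes are generic assumptions for this sketch, not details taken from the paper:

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns the cumulative products abar_t = prod(1 - beta_s)."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, abar, rng):
    """Closed-form forward process:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I).
    Returns the noisy sample x_t and the noise eps (the epsilon-prediction target)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps, eps

rng = np.random.default_rng(0)
abar = make_schedule()
x0 = rng.standard_normal((120, 1245))  # e.g. one 4-second pose sequence
xt, eps = q_sample(x0, t=500, abar=abar, rng=rng)
```

A network trained to predict eps (or x_0, or x_{t−1}) from (x_t, t) can then be sampled by iterating the reverse process T times, starting from pure Gaussian noise.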
Recently, diffusion models have also been explored for the task of video generation, where they were demonstrated to synthesize high-fidelity videos with a high degree of controllability and world knowledge [11].
In the context of human motion generation, text-based models aim to semantically control the movements via natural language. The MotionDiffuse model [35] is the first model to exploit DDPMs for this task, combining these models with a cross-modal Transformer-based architecture. In another approach, denominated Motion Diffusion Model (MDM) [26], textual representations extracted from a pre-trained CLIP [24] are combined with a Transformer model in a classifier-free guidance diffusion training process [13]. Other works tackle the dance generation task, which intends to generate dances given music as audio input. The EDGE [27] method pairs a diffusion model with Jukebox, a generative model for music, whereas the Listen, Denoise and Action! [1] model adapts DiffWave [15] to generate poses and synthesize dances in various styles.
More recently, diffusion models have also been applied to the gesture generation task. DiffMotion [34] is the first approach that applies DDPMs to generate gestures. It leverages an autoregressive temporal encoder based on an LSTM that processes context, represented by spectral audio features and previous poses, to condition a diffusion process, generating each pose individually.
The DiffGesture [37] model uses a convolutional audio encoder to extract representations directly from the raw audio. A Transformer model then uses these representations in an implicit classifier-free guidance diffusion training.
The GestureDiffuCLIP [2] model introduces multimodal (text, motion, or video) prompt-conditioned, style-controlled gesture generation via mode-specific pre-trained CLIP encoders.
Also, they use a contrastive learning strategy to learn semantic correspondences between textual transcripts of the input speech and gestures, allowing for the generation of semantically-aware gestures. These contributions, along with a denoiser network based on Transformers, attention, and AdaIN layers [14] to incorporate style guidance, compose a latent diffusion training process [25].
Finally, the DiffuseStyleGesture [32] model combines layers of cross-local and global attention to better capture the localized aspects of gestures. With representations extracted from the self-supervised WavLM model [6], the authors perform a diffusion training process and are able to generate and control gestures based on a style label.
Despite the increasing interest in the field, the synthesized motions from most models are still far from indistinguishable from real human motion [33]. Moreover, research often concentrates on monadic scenarios in which only one participant actively communicates. Consequently, crucial behaviours of real-life interactions, such as listening, reciprocal expression, and interruptions, are disregarded during development and evaluation.
3 DATA AND DATA PROCESSING
The dataset used by the 2023 GENEA Challenge is an adaptation of the Talking With Hands 16.2M (TWH) data [18]. Pre-processing, data augmentation, and selection are described in the challenge's main paper [17]. The available dataset presents a dyadic scenario, i.e., it is composed of data from two people having a conversation, referred to as the main agent and the interlocutor. Entries to the challenge should only generate movements for the main agent, and using the interlocutor's data was optional. Available data includes motion, speech audio, speech text (audio transcripts with timestamps), and a speaker label. We only used data from the main agent; thus, our model depends on monadic information alone despite the dyadic scenario.
Speaker labels were also ignored.
The dataset motions are BVH files with movements composed of 30 poses per second represented by Euler angles. We extracted each pose and composed a feature vector g = [ρ_p, ρ̇_p, ρ_r, ρ̇_r], where ρ_p ∈ ℝ^{3j} and ρ̇_p ∈ ℝ^{3j} are the global 3D joint positions and positional velocities, ρ_r ∈ ℝ^{6j} and ρ̇_r ∈ ℝ^{3j} are the local 6D joint rotations [36] and the local 3D joint rotational velocities, and j is the number of joints. The 30 frames per second rate of the original data and all 83 joints of the skeleton were preserved; thus g ∈ ℝ^{1245} for each pose. Each dimension of the motion data is normalized to zero mean and unit standard deviation over the challenge training set. Audio files were resampled from 44.1 kHz to 16 kHz.
4 METHOD
Our approach consists of a combination of the MDM [26] and DiffuseStyleGesture [32] models, with modifications aiming for improved responsiveness of gestures given speech audio. The architecture is shown in Figure 1. Our model generates sequences of 120 poses simultaneously, corresponding to 4 seconds. We consider inputs to be divided into global and fine-grained information. The first corresponds to information relevant to the 4-second sequence as a whole, which includes the words spoken (text), seed poses, and the timestep embedding. On the other hand, fine-grained information is considered to be relevant at the frame level; thus, it includes audio and speech activity.
4.1 Global Information
Since gestures can be semantically related to speech, providing text information could improve gesture appropriateness. As textual features, we use the spoken words within a motion sequence. Word timestamps from the audio transcript are used for extracting the corresponding words.
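For concreteness, the pose representation defined in Section 3 can be dimension-checked with a short sketch. The array layout below is an assumption for illustration; only the sizes (j = 83 joints, 1245 dimensions) come from the paper:

```python
import numpy as np

J = 83  # number of skeleton joints in the challenge data

def pose_feature(pos, pos_vel, rot6d, rot_vel):
    """Flatten [rho_p, rho_p_dot, rho_r, rho_r_dot] into one pose vector g."""
    assert pos.shape == (J, 3) and pos_vel.shape == (J, 3)
    assert rot6d.shape == (J, 6) and rot_vel.shape == (J, 3)
    return np.concatenate([pos.ravel(), pos_vel.ravel(),
                           rot6d.ravel(), rot_vel.ravel()])

g = pose_feature(np.zeros((J, 3)), np.zeros((J, 3)),
                 np.zeros((J, 6)), np.zeros((J, 3)))
# (3 + 3 + 6 + 3) * 83 = 1245 dimensions per pose
```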
As in the MDM [26] model, the speech text contained in the sequence of poses is passed through a pre-trained CLIP [24] model¹ and then projected from the CLIP output dimension of 512 to a dimension of 64 by a fully connected layer.
For the motion between consecutive generated sequences to have cohesion, the 10 previous seed poses are used as conditional input. These poses are flattened, projected to a dimension of 192, and then concatenated with the textual information, forming a vector with the defined latent dimension of 256. Additionally, the timestep embedding of the diffusion process, which indicates which denoising step is being performed, is a sinusoidal positional embedding that is passed through two fully connected layers with a Sigmoid Linear Unit (SiLU) activation layer in between and projected to the latent dimension. With this, the embedding that represents the global conditioning information (the one that is invariant to the pose sequence) is obtained by summing the timestep embedding with the concatenation of the textual and seed-pose embeddings.
4.2 Fine-grained Information
We work with chunks of 120 poses corresponding to 4 seconds of motion. The noisy poses for the diffusion process are obtained by adding t steps of Gaussian noise to a sequence. These poses are then projected via a linear layer from the pose dimension of 1245 to the latent space dimension. For the audio information, we use the resampled audio data and pass it through the WavLM [6] model².
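WavLM produces features at its own frame rate (roughly 50 Hz for 16 kHz input), so they must be brought onto the 30 fps pose timeline. A hedged sketch of such an alignment via linear interpolation follows; the exact interpolation method and feature dimensions used by the authors are assumptions here:

```python
import numpy as np

def align_to_poses(feats, n_poses=120):
    """Linearly interpolate (T_audio, D) frame features to (n_poses, D)."""
    t_src = np.linspace(0.0, 1.0, feats.shape[0])
    t_dst = np.linspace(0.0, 1.0, n_poses)
    return np.stack([np.interp(t_dst, t_src, feats[:, d])
                     for d in range(feats.shape[1])], axis=1)

feats = np.random.default_rng(1).standard_normal((200, 8))  # ~4 s of 50 Hz features
aligned = align_to_poses(feats)  # matches a 120-pose (4 s at 30 fps) chunk
```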
Differently from DiffuseStyleGesture [32], we use the representations extracted from the 11th layer instead of the 12th. The 11th layer is reported to perform better at content-related tasks, such as phoneme recognition and automatic speech recognition. These representations are first interpolated to match the length of the corresponding pose sequence and then projected to a dimension of 64 by a linear layer.
¹ Version 'ViT-B/32' obtained from https://github.com/openai/CLIP
² Version 'Base+' obtained from https://github.com/microsoft/unilm/tree/master/wavlm
Figure 1: Model architecture.
4.2.1 Speech Activity Information. Due to the dyadic nature of the dataset, some sections of the data are composed of moments in which the main agent is not the active speaker, such as listening and turn-taking moments. Gestures performed in active or non-active moments may play different roles in human interaction and, thus, differ from those performed in other moments. For example, beat gestures occur during articulations of speech and may serve to emphasize what is being said [21]; differently, mimicry, often performed automatically, may enhance helpfulness and strengthen social bonds [28]. Although our model only uses monadic data, we introduce the use of speech activity information. This information, otherwise embedded in audio representations such as spectrograms and MFCCs, may be lost in the abstract WavLM representations. Furthermore, the interpolation of representations to match the pose sequence can blend moments with and without speech activity. Thus, the contribution of this inclusion is believed to be two-fold. First, it provides more straightforward access to fine-grained speech energy. Second, it helps to stress, during training, the difference between gestures in the aforementioned moments, not in terms of functionality, but dynamics.
Speech activity can be inferred through analytical approaches such as energy and F0.
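A naive energy-based detector of the kind alluded to here could look as follows. This is purely illustrative: the frame length and threshold are arbitrary assumptions, and the paper argues that this style of detector is unreliable on the noisy challenge audio, which is why a pre-trained VAD is used instead:

```python
import numpy as np

def energy_vad(audio, frame_len, threshold=0.02):
    """Per-frame speech-activity guess: 1 where the frame RMS exceeds a threshold."""
    n = len(audio) // frame_len
    frames = audio[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return (rms > threshold).astype(int)

sr = 16000
t = np.arange(sr) / sr
# one second of a quiet tone followed by one second of silence
audio = np.concatenate([0.1 * np.sin(2 * np.pi * 220 * t), np.zeros(sr)])
vad = energy_vad(audio, frame_len=sr // 30)  # ~30 VAD frames per second
```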
However, the dataset audios contain noise that could affect computing these parameters: various speakers, different speech volumes, and background noise such as speech from the interlocutor and breathing. Thus, we consider two scenarios for acquiring speech activity information. The first is based on a pre-trained Voice Activity Detector (VAD)³ that consists of a small CRDNN (a combination of convolutional, recurrent, and deep neural networks) trained on the LibriParty dataset⁴, a synthetic cocktail-party scenario derived from the LibriSpeech dataset. When speech is detected, the model outputs a 1, and otherwise a 0. The second approach is taken from the annotated speech text timestamps provided in the dataset. When there is any text, we consider the respective timestamps as 1 and otherwise as 0.
³ Obtained from https://huggingface.co/speechbrain/vad-crdnn-libriparty
The major difference between these approaches is that the pre-trained model can detect intra-text pauses, whereas audio transcripts provide word-level timestamp granularity. A comparison of both is shown in Figure 2. From the figure, it is noticeable that the VAD provides closer alignment with speech energy. Besides, the pre-trained VAD removes the need for audio-aligned annotated speech text, which is sensitive to human perception or error.
Figure 2: Scaled speech activities from timed audio transcripts (red) and from the VAD (black) overlapped with a spectrogram of an eight-second audio sample in the background.
The speech activity sequence extracted from the VAD is used to select, for each pose, one of two embeddings with the latent dimension representing the presence or absence of speech. This sequence of embeddings is then concatenated with the noisy poses and the audio embeddings, forming the fine-grained information.
4.3 Training
The fine-grained information is concatenated with the global information along the latent dimension.
Then, all the input information is projected back to the latent dimension by an input linear layer and fed to the cross-local attention layer to capture local relations between the features. Next, we concatenate the global information embedding one more time with the output along the sequence dimension before passing the sequence to the Transformer encoder to capture the global context. Finally, we ignore the first token of the output sequence and project the outputs to the pose dimension, which represents the denoised pose (x_0) itself. We use positional embeddings to add sequence information in both the cross-local attention and the Transformer encoder.
At inference, one sequence at a time is generated. The model outputs a vector G = [g_1, g_2, ..., g_120]. The last 10 poses from the previously generated sequence are used to condition the generation of the next sequence; mean poses are used for conditioning the first sequence.
⁴ https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriParty/generate_dataset
For post-processing, we use linear interpolation to impose continuity between successive sequences. To smooth motion artifacts in the output, we also apply a Savitzky–Golay [20] filter with a window length of 9 and polynomial order of 3.
The model was trained for 290k steps with a batch size of 64 on a single NVIDIA Titan Xp GPU, which took about 2.5 days.
5 EVALUATION
There is still no objective metric that reliably measures gesture perception. Moreover, previous research has found that objective metrics differ from subjective ones [16]. Therefore, the research team empirically evaluated the proposed model, its variations, and the reference models through visual inspection of their outputs.
We trained the MDM [26] and DiffuseStyleGesture [32] and used them as references for comparison, i.e., a starting point for development. Although they provide reasonably human-like motion, in terms of appropriateness to speech we found the results unsatisfactory.
The outputs seemed unaware of moments such as brief pauses, turn-taking, and listening moments. That is, the agent would frequently make gestures in those moments that appeared inadequate and similar to behaviours performed when it was the active speaker. So, our main focus in developing the model for the GENEA Challenge 2023 was to overcome those issues of disregard for no-speak moments. Thus, a VAD was employed to leverage speech activity information.
Figure 3: Histograms of the rotational velocities of the main agent's left and right forearm joints from the training set of the dataset (top) and from the output of the proposed model (bottom). Red and black indicate velocities extracted when the main agent was the active speaker and when it was not.
In order to examine the effectiveness of the VAD, Figure 3 presents histograms of the rotational velocities of the forearms, joints that are very active when gesturing with the arms, for the real training set (top) and the output of the proposed model (bottom). The figure splits each set into two distributions: when the VAD indicates that there is speech (VAD output equals one) and when the VAD indicates that there is no speech (output equals zero).
For the training set, the histograms reveal distinct patterns associated with speech activity during gesticulation. Speakers in the dataset exhibit increased forearm movement while talking versus silent periods. These insights support our underlying assumption that people tend to perform more gestures, or at least more abrupt gestures, when they are speaking.
The proposed model could reproduce, to some extent, the overall behaviour of the training set.
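The analysis behind Figure 3 amounts to partitioning per-frame joint speeds by the VAD output before histogramming. A minimal sketch (variable names and example values are hypothetical):

```python
import numpy as np

def split_speeds_by_vad(speeds, vad):
    """speeds: (T,) per-frame rotational speed of a joint; vad: (T,) 0/1 VAD output.
    Returns (speaking, silent) arrays, ready for separate histograms."""
    vad = np.asarray(vad, dtype=bool)
    return speeds[vad], speeds[~vad]

speeds = np.array([0.1, 0.9, 0.2, 1.1, 0.05])
speaking, silent = split_speeds_by_vad(speeds, [0, 1, 0, 1, 0])
# np.histogram(speaking, ...) and np.histogram(silent, ...) would then
# reproduce the red/black split shown in Figure 3
```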
However, it was unable to synthesize motion that reproduced the differences seen in the training set given speech activity, that is, a larger concentration of higher velocities when the agent is speaking. We performed an ablation study with the proposed model without the VAD module. Its histogram was similar to the one with VAD. However, visual inspection of the outputs by the research team favored the outputs of the proposed model with VAD in terms of speech and gesture alignment.
We also compared outputs from models with and without text input. However, we did not find a significant amount of semantically related gestures in their output. Further investigation should be carried out to determine whether there is a sufficient amount of such gestures in the dataset for models to learn from. Still, we kept text as input, as motion quality was not impaired.
Compared to our reference models, the output of the proposed model seems better, especially in terms of speech audio and gesture alignment. However, we notice that some artifacts are still present in the motions. Motions occasionally converge to an unusual or odd-looking pose, absurd rotations still take place, and jittering is sometimes noticeable.
6 RESULTS AND DISCUSSION
The results of the shared evaluations of the GENEA Challenge 2023 indicated that our model (condition SH) is competitive with most conditions in terms of human-likeness but obtained relatively poor results for appropriateness to speech [17].
Figure 4 presents the human-likeness ratings. Participants gave their ratings based on how human-like the motions appeared, from 0 (worst) to 100 (best). Real motion data (NA) achieved a median rating of 71, the baselines 46 (BD) and 43 (BM), while our condition scored 46. We believe that the module that contributed the most to the human-likeness of generated gestures is the attention mechanism. As Yang et al.
[32] showed in their ablation studies, the cross-local attention module played a significant role in human-likeness ratings.
Two evaluations were performed to assess gesture appropriateness to speech: appropriateness for agent speech and appropriateness for the interlocutor's speech. The first contains mainly moments where the main agent is the active speaker, while the roles are reversed in the latter.
Figure 4: Shared human-likeness rating study. Red bars are median ratings; yellow diamonds are mean ratings. Entries to the challenge are labeled SA-SL (ours is SH), BD and BM are the baselines [5], and NA is real data. Extracted from Kucherenko et al. [17].
Figure 5: Shared appropriateness for agent speech responses. Entries to the challenge are labeled SA-SL (ours is SH), BD and BM are the baselines [5], and NA is real data. Extracted from Kucherenko et al. [17].
For the appropriateness for agent speech evaluation, subjects were presented with speech audio and two motions generated by the model. One motion is the output generated with the presented speech audio as input, and the other is the output from another segment of speech audio. For our condition, subjects preferred the matching motion 52.9% of the time, slightly above chance. Although ours is one of the lowest mean appropriateness scores, there are no statistically significant differences between our scores and those of another ten conditions (conditions BM to SA in Figure 5).
Our condition had the lowest score in the appropriateness for the interlocutor evaluation. This means that subjects found the mismatched stimuli more appropriate. However, our model does not use any interlocutor information as input.
Thus, from the model’sICMI ’23 Companion, October 9–13, 2023, Paris, France Tonoli, et al.NA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 6: Shared appropriateness for the interlocutor re-sponses. Entries to the challenge are labeled SA-SL (ours isSH), BD and BM are the baselines, and NA is real data. Ex-tracted from Kucherenko et al. [17].perspective, the output of both matched and mismatched stimulifor this evaluation was generated using the same inputs. Evidently,it is not expected that the outputs be exactly the same due to theprobabilistic nature of the model. But both outputs are expectedto be equivalent in terms of human-likeness and appropriateness,thus scoring similarly to chance (50%).We also noticed from all three evaluations that our model had awide range of scores. For instance, whiskers from the box plot visu-alization of Figure 4 span almost the entire y-axis; our condition,along with condition SK, had the highest confidence intervals ofmedian and mean ratings. In the appropriateness for agent speech,our condition had the third highest number of clear preferences formatched stimuli, the highest for mismatched, and the second lowestfor no preferences when compared to other entries to the challenge.Thus, we argue that the proposed model is indeed capable of gen-erating gestures that are competitive in terms of human-likenessand appropriateness for the main agent. However, the artifactsmentioned in the previous section hinder gesture perception andshould be addressed before any conclusion regarding the proposedarchitecture and individual modules.7 CONCLUSIONThis paper describes the proposed diffusion-based model for ges-ture generation that uses pre-trained VAD. Incorporating speechactivity information in such models could improve responsivenessduring rapid back-and-forth interactions. 
Also, a VAD can explicitly provide this information without needing human-annotated transcripts, and is thus potentially suited for real-time dialogue. Our model was compared with others in the GENEA Challenge 2023, a crowd-sourced evaluation that directly compares different methods while controlling factors such as data and evaluation methodology. The evaluation showed that our model is competitive with other entries to the challenge in terms of human-likeness, but appropriateness to speech is still unimpressive despite our efforts.
Our experiments revealed mixed results regarding the effectiveness of the proposed improvements to the gesture generation system. While convergence to undesired poses, extreme joint rotations, and jittering were not frequent, they nonetheless occurred. Besides, the output motion was unstable, i.e., when generating motions given the same inputs, the resulting motion quality varied greatly. These issues may have contributed to subpar performance in the evaluations and compromised the responsiveness of generated gestures to speaking moments. Although our adaptations hold potential value for gesture generation tasks, further improvements are needed to fully leverage their benefits, especially the explicit use of speech activity information, which could be leveraged to address turn-taking moments.
We intend to focus primarily on improving speech and gesture alignment in future work. An interesting approach is adapting an external framework for alignment such as the one proposed by Badlani et al. [3]. Another obvious path is to incorporate data from the interlocutor to capture the aspects of dyadic scenarios.
ACKNOWLEDGMENTS
This study was partially funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001. The first author is grateful to the Eldorado Research Institute.
1NQlDR4LS7y
This paper clearly describes the details of, and the rationale for, using a VAD to improve speech-gesture synchronization.
6: Marginally above acceptance threshold
The paper is clear and concise. It uses VAD output as an additional input to improve synchronization. As there remains a gap between natural and synthetic gestures, the study of input features is important for improving speech appropriateness. The weakness of this paper, however, is that the dyadic setting is not addressed. The following are some suggestions:
1. It would be great if the authors could include a figure like Fig. 3, but showing the velocity and voice activity over time for a gesture. It would strongly justify the selection of VAD features. It is interesting to see to what extent voice activity correlates with the velocity magnitude of an arm joint.
2. As far as I know, the baselines use energy and timed text transcripts for fine-grained information. Energy or prosody features seem to provide richer information than binary voice activity detections. Adding comparisons of these features would be very helpful.
3. Consider adding a discussion of the voice activity of the interlocutor. How could it affect the main agent's gestures?
4. Please cite the baseline, "The IVI Lab entry to the GENEA Challenge 2022".
4: The reviewer is confident but not absolutely certain that the evaluation is correct
bBrebR1YpXe
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The UEA Digital Humans entry to the GENEA Challenge 2023
["Jonathan Windle", "Iain Matthews", "Ben Milner", "Sarah Taylor"]
This paper describes our entry to the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. The challenge aims to further the scientific knowledge of automatic gesture generation using a large-scale, joint subjective evaluation of many systems. This year's challenge focuses on generating gestures in a dyadic setting -- predicting a main-agent's motion from the speech of both the main-agent and an interlocutor. We adapt a Transformer-XL architecture for this task by adding a cross-attention module that integrates the interlocutor's speech with that of the main-agent. Our model is conditioned on speech audio (encoded using PASE+), text (encoded using FastText) and a speaker identity label, and is able to generate smooth and speech appropriate gestures for a given identity. We consider the GENEA Challenge user study results and present a discussion of our model strengths and where improvements can be made.
["uea digital humans", "genea challenge", "challenge", "speech", "interlocutor", "entry", "genea", "generation", "evaluation", "behaviour"]
ABSTRACT
This paper describes our entry to the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. This year's challenge focuses on generating gestures in a dyadic setting – predicting a main-agent's motion from the speech of both the main-agent and an interlocutor. We adapt a Transformer-XL architecture for this task by adding a cross-attention module that integrates the interlocutor's speech with that of the main-agent. Our model is conditioned on speech audio (encoded using PASE+), text (encoded using FastText) and a speaker identity label, and is able to generate smooth and speech-appropriate gestures for a given identity. We consider the GENEA Challenge user study results and present a discussion of our model's strengths and where improvements can be made.

CCS CONCEPTS
• Computing methodologies → Artificial intelligence; Animation.

KEYWORDS
Speech-to-gesture, 3D pose prediction, gesture generation, Transformer-XL, Self-Attention, Cross-Attention

ACM Reference Format:
Jonathan Windle, Iain Matthews, Ben Milner, and Sarah Taylor. 2023. The UEA Digital Humans entry to the GENEA Challenge 2023. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3577190.3616116

1 INTRODUCTION
Co-speech gesturing contributes to language production and perception during conversation. Gestures can aid conversation turn-taking and listener feedback while also providing semantic context, and may be indicative of emotion and emphasis [4, 9, 16, 22]. Speech-driven gesture generation has predominantly focused on estimating motion for monadic speech input of a main-agent, with no knowledge of interlocutor speech and no concept of interaction.
Instead, this year's GENEA challenge focuses on generating gestures in a dyadic setting – predicting a main-agent's motion from the speech of both the main-agent itself and also the speech of the interlocutor. We introduce a system to the GENEA Challenge 2023 that uses PASE+ [21] speech embeddings in conjunction with FastText [2] word embeddings and a speaker identity label as input to an adapted Transformer-XL [3] architecture to generate smooth, contextually and temporally coherent motion that can adapt to varying lengths of historic context. Specifically, we extend the Transformer-XL model to provide cross-attention with the interlocutor's speech to impart knowledge of both speakers into the prediction. Video examples and code can be found in the supplement at github.com/JonathanPWindle/uea-dh-genea23.

(Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). ICMI '23, October 9–13, 2023, Paris, France. © 2023 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0055-2/23/10. https://doi.org/10.1145/3577190.3616116)

2 BACKGROUND & PRIOR WORK
Many speech-to-motion deep learning techniques are built upon recurrent models, such as bi-directional Long Short-Term Memory models (LSTMs) [5, 7, 23]. Transformer architectures are gaining traction over LSTM models in sequence-based AI, with sequence-based motion prediction models already making use of them [1, 10, 15, 24]. Transformer models do not have a concept of temporal position but can effectively model temporal information, often using a sinusoidal position embedding which is added to the input. Transformers rely on attention mechanisms which inform the network which parts of the data to focus on [25].
In self-attention, the mechanism is applied to the input sequence to find which elements within the same sequence may relate to each other and which are key to focus on. Conversely, cross-attention is computed for one input source in relation to a separate input source, calculating which elements from one sequence may relate to, and be important to focus on in, another sequence.

To perform sequence-to-sequence generation using a vanilla transformer as defined in Vaswani et al. [25], a sequence is processed over a sliding window with a one-frame stride. For each window of input, one frame of output is generated. This is computationally expensive, and window size is limited by the longest input sequence seen during training. As the sequence length increases, the cost of the self-attention mechanism grows quadratically, leading to memory and computational limitations.

The Transformer-XL architecture [3] differs from the traditional transformer architecture in two key ways: 1) attention is calculated conditioned on the previous context, and 2) the positional encoding uses a learned relative embedding. The Transformer-XL architecture allows for extended attention beyond a fixed length by using segment-level recurrence with state reuse, allowing the alteration of context length. The Transformer-XL can therefore be trained efficiently on small segment lengths while retaining historical influence through the state reuse. As the historic context length can vary, the Transformer-XL introduces a learned, relative positional encoding scheme. Due to its improved ability for modelling sequences, we adapt the Transformer-XL architecture for dyadic gesture generation.

3 DATA & PREPROCESSING
Our model makes use of the GENEA challenge data [11] derived from the Talking With Hands dataset [12].
This data includes dyadic conversations between a main-agent and an interlocutor and consists of high-quality 30 fps mocap data in Biovision Hierarchical (BVH) format, with corresponding speech audio and text transcripts. Our task is to generate the main-agent motion conditioned on both main-agent and interlocutor speech. We process main-agent and interlocutor speech data in the same way, using all available modalities: motion, speech, transcription and speaker identity.

3.1 Motion
Euler angles are required for test submission and are a convenient representation supported by many available 3D animation pipelines. Despite this, Euler angles are discontinuous and difficult for neural networks to learn [28]. We convert rotations to the 6D rotation representation presented by Zhou et al. [28] for its suitability to deep learning tasks. Global skeleton position is also encoded using three x, y, z values. All values are standardised by subtracting the mean and dividing by the variance computed from the training data.

Each identity in the dataset has a skeleton with different bone lengths. Additionally, per-frame joint offsets are also present in the data, possibly to account for bone-stretching in the data capture. Our analysis of these joint offset values revealed very low variance, and setting them to a pre-defined fixed value for all frames did not impact visual performance. We therefore compute one set of bone lengths and offsets per speaker to simplify the training pipeline. We randomly select a sample corresponding to each identity and fix the bone lengths and offsets accordingly using the first data frame. Joint positions can then be computed using the joint angles (measured or predicted) and the pre-defined speaker-specific bone measurements.

3.2 Speech
3.2.1 Audio. We extract audio features using the problem-agnostic speech encoder (PASE+) [21]. PASE+ is a feature embedding learned using a multi-task learning approach to solve 12 regression tasks aimed at encoding important speech characteristics.
These 12 tasks include estimating MFCCs, FBANKs and other speech-related information including prosody and speech content. PASE+ requires audio to be sampled at 16 kHz, so we used band-sinc filtering to reduce the audio sample rate from 42 kHz to 16 kHz. We use the released, pre-trained PASE+ model to extract audio feature embeddings of size 768, each representing a 33 ms window of audio to align with the 30 fps motion. The weights for this model are not updated during training.

3.2.2 Text. We extract features from the text transcriptions using the FastText word embedding described by Bojanowski et al. [2], using the pre-trained model released by Mikolov et al. [17]. For each spoken word, we extract the word embedding and align the embedding values to each 33 ms window of motion. If no word is spoken at a given frame then a vector of zero values is passed. When a word is spoken across multiple frames, the vector is repeated for the appropriate number of frames.

4 METHOD
We adapt the Transformer-XL [3] architecture for speech-driven gesture generation. Specifically, we modify this architecture to use both self- and cross-attention. The advantage of the Transformer-XL architecture is that it allows us to model the longer-term relationship between speech and gesture for input of any duration. Our feature extraction process, shown in Figure 1, is used to generate a feature vector X of length w for both the main-agent and the interlocutor. These features are then passed to our model as shown in our overview in Figure 2, where they are processed using a number of Self-Attention Blocks and Cross-Attention Blocks.

Figure 1: Outline of our data processing pipeline. Our process takes as input w frames, starting at frame t, of speech audio, text transcript and a speaker identity label to generate a feature vector X. We use pre-trained models for the audio and text inputs.
Red box defines frozen weights.

Figure 2: Outline of our prediction model, which takes as input w motion frames' worth of encoded conditioning information starting at time t and predicts w frames of body motion. We show a self-attention block and a cross-attention block, where we extract Q, K, V vectors using main-agent or interlocutor speech according to the attention type, conditioned on a previous number m of hidden states M. These vectors are passed to the Transformer-XL attention block to calculate attention before being fed into a feed-forward block. A final linear layer predicts w poses ŷ_{t:t+w}.

4.1 Feature Extraction
We segment the input into non-overlapping segments of length w frames. For each segment, an input feature vector X is generated and used to predict Y, a sequence of poses of length w. Our model is called for each w-frame feature vector X. In a speech sequence of length T, it is therefore called ⌈T/w⌉ times.

For each segment, we extract audio (PASE+) features a_{t:t+w} and text (FastText) features f_{t:t+w} as described in Section 3.2, where t represents the start frame of a window w. For each utterance, there is also a speaker label provided. This is a unique ID which we pass to a learned embedding layer. The embedding layer acts as a lookup
The trainable weights ensure that two speakers withsimilar gesture styles are close in the latent embedding space, andconversely, those with different gesturing styles are far apart.Each modality is extracted and concatenated into a single featurevector Xas shown in Figure 1. Feature vectors for both the main-agent and the interlocutor are extracted in the same way using thesame learned weights. This is because a speaker may appear as themain-agent in some sequences and the interlocutor in others.4.2 Self-AttentionAs shown in Figure 2, we process the features from the main-agent using a self-attention block. The attention score is defined inVaswani et al. [25] as:Attention(Q,K,V)=softmax(QKT√︁dk)VWhere Query Q, KeyK, and Value Vare all vectors and queriesand keys are of dimension dk, and values of dimension dv. Thesevectors are often linear projections of an input vector into theirrespective dimensions d.When calculating attention scores in the Transformer-XL model,historic context is included using segment-level recurrence withstate reuse. This is achieved by caching previous hidden state se-quences which can be used when processing future segments. Whenno historic context is present at the start of the speech sequence,our Transformer-XL extracts Q,K andVvectors from the main-agent inputs alone. The historic context from processed segmentsMof lengthmis cached as each segment is processed. Q,K andVvectors are then extracted from the subsequent inputs, conditionedon previous context. This process is completed using a Linear QKVNet shown in Figure 2 which is a single linear layer.Transformer models do not have inherent knowledge of posi-tional order. To ensure temporal coherency, a positional encodingis often added to the input vectors to inject some position contextto the model. 
As the Transformer-XL architecture can have varying lengths of historic context and is not constrained to a maximum length, a learned relative position encoding r is instead utilised. The learned relative encoding is from a single linear layer and takes a sinusoidal position embedding for the full length of context, that is, the sum of both the memory length available and the query length. Rather than injecting the temporal information into the input before calculating Q, K and V, which is the approach used in Vaswani et al. [25], the Transformer-XL inputs this information after these vectors have been extracted, at the time of calculating the attention score.

Using Q, K and V in conjunction with the relative position encoding r, we use the Transformer-XL attention block to calculate attention vectors. As Figure 2 shows, these attention vectors are then passed to a Feed Forward Block, which comprises two linear layers, with a ReLU activation on the first output and dropout applied to both.

Each self-attention block has multiple attention heads, each aiming to extract different attention features, and a self-attention block is repeated N_self times, with each layer feeding its output to the next. Memory values M are persisted on a per-layer basis and therefore hidden states are specific to each self-attention block. The length m of this memory can be altered during training and evaluation.

4.3 Cross-Attention
While it is reasonable to assume the main-agent's speech is driving the majority of the gestures, the interlocutor can also influence the motion of the agent, indicating turn-taking and backchannel communication. For example, the main-agent might nod to show agreement or understanding when the interlocutor is speaking. Therefore we aim to derive the main source of information driving the motion from the main-agent's speech, but also include the interlocutor's speech.
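The sinusoidal position embedding that feeds the learned relative encoding described above can be sketched as follows (a minimal sketch of the standard Vaswani et al. scheme; the paper's relative encoding additionally projects this through a learned linear layer, which is omitted here):

```python
import math

def sinusoidal_embedding(length, dim):
    """Standard sinusoidal position embeddings: even dimensions use sin,
    odd dimensions use cos, with a geometric frequency schedule.

    For relative encoding, `length` would cover the full context,
    i.e. memory length m plus segment (query) length w.
    """
    emb = []
    for pos in range(length):
        row = []
        for i in range(dim):
            # Paired dims (2k, 2k+1) share the frequency 1 / 10000^(2k/dim).
            freq = 1.0 / (10000 ** ((i // 2 * 2) / dim))
            row.append(math.sin(pos * freq) if i % 2 == 0
                       else math.cos(pos * freq))
        emb.append(row)
    return emb
```

With the paper's settings (m = 180, w = 90), the embedding would cover 270 positions before the learned linear projection is applied.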
We adapt the Transformer-XL to not only compute self-attention over the main-agent inputs, but to also utilise cross-attention from the interlocutor while maintaining segment-level recurrence and relative position encoding. This cross-attention block is shown in Figure 2.

Cross-attention is an attention mechanism where the query Q is extracted from the input source and the key K and value V are extracted from an external input element. Our cross-attention block uses a similar approach to the self-attention block defined in Section 4.2, but instead has two separate networks to process the inputs: one to extract Q from the main-agent self-attention encoding and one to extract K and V derived from the interlocutor speech. For each layer of cross-attention blocks, the input to the Q net is a skip connection from the output of the self-attention encoder and therefore remains the same input for all cross-attention blocks. The input to the KV net in the first iteration is the interlocutor feature vectors (described in Section 4.1), and the output from a cross-attention block thereafter. The output from the cross-attention block is then passed to a single linear layer which predicts Y, the standardised 6D rotations of each joint and the global position of the skeleton.

4.4 Training Procedure
For each segment of speech of length w, we predict the pose represented by a vector of joint rotations Ŷ of length w. In motion synthesis it is common to include both geometric and temporal constraints in the loss function to ensure that the model generates output that is both geometrically and dynamically plausible [6, 24, 26]. Our loss function L_c comprises multiple terms, including an L1 loss on the rotations (L_r), positions (L_p), velocity (L_v), acceleration (L_a) and kinetic energy (L_v2) of each joint.
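As an illustration, the composite objective enumerated above can be sketched with finite-difference derivatives over flat per-frame sequences (a minimal one-dimensional sketch, not the authors' implementation; their loss operates on full joint tensors, and the λ defaults below are taken from their Table 1):

```python
def l1(a, b):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def diff(seq):
    """First-order finite difference along time (velocity from position)."""
    return [b - a for a, b in zip(seq, seq[1:])]

def composite_loss(y_pos, yhat_pos, y_rot, yhat_rot,
                   lam_p=0.01, lam_v=0.5, lam_a=0.5, lam_r=1.0, lam_v2=0.2):
    """L_c = lam_p*L_p + lam_v*L_v + lam_a*L_a + lam_r*L_r + lam_v2*L_v2,
    with velocity/acceleration taken as finite differences of position and
    kinetic energy approximated by squared velocity."""
    v, vh = diff(y_pos), diff(yhat_pos)        # f'(y_p), f'(yhat_p)
    a, ah = diff(v), diff(vh)                  # f''(y_p), f''(yhat_p)
    return (lam_r * l1(y_rot, yhat_rot)
            + lam_p * l1(y_pos, yhat_pos)
            + lam_v * l1(v, vh)
            + lam_a * l1(a, ah)
            + lam_v2 * l1([x * x for x in v], [x * x for x in vh]))
```

The loss is zero only when rotations and positions (and hence their derivatives) match exactly, and any deviation in position is penalised through the position, velocity, acceleration and energy terms simultaneously.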
If we take y_r and ŷ_r to be natural mocap and predicted 6D rotations respectively, and y_p and ŷ_p to be positions in world space computed using forward kinematics given the predicted joint angles and the pre-defined speaker-specific bone lengths, we use the following loss function:

    L_r  = L1(y_r, ŷ_r)
    L_p  = L1(y_p, ŷ_p)
    L_v  = L1(f′(y_p), f′(ŷ_p))
    L_v2 = L1(f′(y_p)², f′(ŷ_p)²)
    L_a  = L1(f″(y_p), f″(ŷ_p))
    L_c  = λ_p L_p + λ_v L_v + λ_a L_a + λ_r L_r + λ_v2 L_v2    (1)

where f′ and f″ are the first and second derivatives respectively. Each term has a λ weighting to control its importance in the loss.

Table 1 summarises the parameters used, optimised using a random grid-search parameter sweep. These settings were chosen using a combination of low validation loss values and the quality of the predicted validation sequences as observed by our team. We train our model for 1770 epochs using the AdamW [14] optimiser, and found that a segment length w of 90 frames and memory length m of 180 frames was optimal. The Feed Forward Blocks used in both self- and cross-attention layers have the same topology and size.

    TransformerXL:  Head Dimension 32 | Number of Heads 32 | Self-Attention Layers (N_self) 4 | Cross-Attention Layers (N_cross) 2 | Feed Forward Block Dropout 0.2 | Hidden Size 4096
    Embeddings:     Feature Embedding 1024 | Speaker Embedding 8
    Training:       Batch Size 32 | Learning Rate 0.00001 | λ_r 1 | λ_p 0.01 | λ_v, λ_a 0.5 | λ_v2 0.2
    Context:        Segment Length (w) 90 frames | Memory Length (m) 180 frames
Table 1: Training hyperparameters.

5 RESULTS
Our approach is evaluated in conjunction with the GENEA Challenge 2023 [11]. Each challenge participant submitted 70 BVH files for main-agent motion generated using the speech of the main-agent and interlocutor for each interaction.
Using these submitted BVH files, motion is rendered on the same character for comparison. There are three studies of interest in this challenge: human-likeness, appropriateness to speech and appropriateness to the interlocutor. Each challenge participant is assigned a unique ID to provide anonymity during the evaluation process; our ID, used in figures and tables throughout, is SJ. NA denotes the natural motion of the mocap sequences; BD and BM are baseline systems in a dyadic and monadic setting respectively. We give a brief overview of each evaluation method; however, we strongly recommend also reading the main challenge paper [11] for full details.

    Condition | Median human-likeness | Mean human-likeness
    NA | 71 ∈ [70, 71] | 68.4 ± 1.0
    SG | 69 ∈ [67, 70] | 65.6 ± 1.4
    SF | 65 ∈ [64, 67] | 63.6 ± 1.3
    SJ | 51 ∈ [50, 53] | 51.8 ± 1.3
    SL | 51 ∈ [50, 51] | 50.6 ± 1.3
    SE | 50 ∈ [49, 51] | 50.9 ± 1.3
    SH | 46 ∈ [44, 49] | 45.1 ± 1.5
    BD | 46 ∈ [43, 47] | 45.3 ± 1.4
    SD | 45 ∈ [43, 47] | 44.7 ± 1.3
    BM | 43 ∈ [42, 45] | 42.9 ± 1.3
    SI | 40 ∈ [39, 43] | 41.4 ± 1.4
    SK | 37 ∈ [35, 40] | 40.2 ± 1.5
    SA | 30 ∈ [29, 31] | 32.0 ± 1.3
    SB | 24 ∈ [23, 27] | 27.4 ± 1.3
    SC |  9 ∈ [9, 9]   | 11.6 ± 0.9
Table 2: Summary statistics of user-study ratings from the human-likeness study, with confidence intervals at the level α = 0.05. Conditions are ordered by decreasing sample median rating. Our model results (SJ) are highlighted in pink. Table and caption from [11].

Figure 3: Significance of pairwise differences between conditions in the human-likeness study. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction. Conditions are listed in the same order as in Table 2.
Figure and caption from [11].

5.1 Human Likeness
This user study aims to evaluate how human-like the generated motion is, independent of the speech. Although each comparison system's motion corresponds to the same input speech and conditioning, the sequences were muted to ensure ratings can only depend on the motion seen in the videos. Eight systems were compared at any one time, and participants were asked "Please indicate on a sliding scale how human-like the gesture motion appears". Study participants gave their ratings in response to this question on a scale from 0 (worst) to 100 (best).

Summary statistics (median, mean) are shown in Table 2 and significance comparisons are provided in Figure 3. Our system (SJ) was evaluated to be the third-highest ranking of the submitted systems with regard to mean and median human-likeness score. Figure 3 shows that only NA, SG and SF are significantly better than our system. Our system scores significantly higher than 9 other systems, including both baseline systems.

    Condition | MAS | Pref. matched | Raw response count (2 / 1 / 0 / −1 / −2 / Sum)
    NA | 0.81 ± 0.06 | 73.6% | 755 / 452 / 185 / 217 / 157 / 1766
    SG | 0.39 ± 0.07 | 61.8% | 531 / 486 / 201 / 330 / 259 / 1807
    SJ | 0.27 ± 0.06 | 58.4% | 338 / 521 / 391 / 401 / 155 / 1806
    BM | 0.20 ± 0.05 | 56.6% | 269 / 559 / 390 / 451 / 139 / 1808
    SF | 0.20 ± 0.06 | 55.8% | 397 / 483 / 261 / 421 / 249 / 1811
    SK | 0.18 ± 0.06 | 55.6% | 370 / 491 / 283 / 406 / 252 / 1802
    SI | 0.16 ± 0.06 | 55.5% | 283 / 547 / 342 / 428 / 202 / 1802
    SE | 0.16 ± 0.05 | 54.9% | 221 / 525 / 489 / 453 / 117 / 1805
    BD | 0.14 ± 0.06 | 54.8% | 310 / 505 / 357 / 422 / 220 / 1814
    SD | 0.14 ± 0.06 | 55.0% | 252 / 561 / 350 / 459 / 175 / 1797
    SB | 0.13 ± 0.06 | 55.0% | 320 / 508 / 339 / 386 / 262 / 1815
    SA | 0.11 ± 0.06 | 53.6% | 238 / 495 / 438 / 444 / 162 / 1777
    SH | 0.09 ± 0.07 | 52.9% | 384 / 438 / 258 / 393 / 325 / 1798
    SL | 0.05 ± 0.05 | 51.7% | 200 / 522 / 432 / 491 / 170 / 1815
    SC | −0.02 ± 0.04 | 49.1% | 72 / 284 / 1057 / 314 / 76 / 1803
Table 3: Summary statistics of user-study responses from the appropriateness to speech study, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. “Pref.
matched” identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties. Our model results (SJ) are highlighted in pink. Table and caption from [11].

5.2 Speech Appropriateness
To measure the appropriateness of gestures to speech, participants were asked to view two videos and answer "Which character's motion matches the speech better, both in terms of rhythm and intonation and in terms of meaning?". Both video stimuli are from the same condition, thus ensuring the same motion quality, but one matches the speech and the other is mismatched, generated from an unrelated speech sequence. Five response options were available, namely "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better". Each answer is assigned a value of −2, −1, 0, 1 or 2, where a negative value is given for a preference for mismatched motion and a positive value for a preference for matched motion.

Table 3 provides summary statistics and win rates, Figure 4 visualises the response distribution and Figure 5 shows significance comparisons. Our approach (SJ) ranked second among the submitted systems. Figure 5 shows that there are few significant pairwise differences between systems. Only SG and the natural mocap (NA) rank significantly better than our system. Again, our system ranks significantly better than 9 other conditions, including the dyadic baseline system.

Figure 4: Bar plots visualising the response distribution in the appropriateness to speech study.
The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied ("They are equal") responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colours correspond to slight preference, and darker colours to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score. Figure and caption from [11].

5.3 Interlocutor Appropriateness
As this year's challenge includes awareness of the interlocutor's speech and motion, the appropriateness of the generated main-agent motion to the interlocutor's speech is also evaluated. This was done using a technique similar to that used for measuring speech appropriateness, but it differed in several important aspects. The test data contained pairs of interactions: one with matched main-agent and interlocutor interactions, and another with the same main-agent speech but mismatched interlocutor speech. Preference can be quantified for generated motion with matched over mismatched interlocutor behaviour, and we can assess how interlocutor behaviour affects the motion.

Our system ranked 8th in this study, but only the natural mocap, SA, BD and SL are rated significantly higher than it. There is no other significant difference to any other system, except SH, where we were significantly better.
We observe from the statistics in Figure 7 that our system had the lowest number of negative scores (preference for the mismatched dyadic interaction) and a large number of no-preference scores.

Figure 5: Significance of pairwise differences between conditions in the appropriateness to speech evaluation. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction. Conditions are listed in the same order as in Table 3. Figure and caption from [11].

    Condition | MAS | Pref. matched | Raw response count (2 / 1 / 0 / −1 / −2 / Sum)
    NA | 0.63 ± 0.08 | 67.9% | 367 / 272 / 98 / 189 / 88 / 1014
    SA | 0.09 ± 0.06 | 53.5% | 77 / 243 / 444 / 194 / 55 / 1013
    BD | 0.07 ± 0.06 | 53.0% | 74 / 274 / 374 / 229 / 59 / 1010
    SB | 0.07 ± 0.08 | 51.8% | 156 / 262 / 206 / 263 / 119 / 1006
    SL | 0.07 ± 0.06 | 53.4% | 52 / 267 / 439 / 204 / 47 / 1009
    SE | 0.05 ± 0.07 | 51.8% | 89 / 305 / 263 / 284 / 73 / 1014
    SF | 0.04 ± 0.06 | 50.9% | 94 / 208 / 419 / 208 / 76 / 1005
    SI | 0.04 ± 0.08 | 50.9% | 147 / 269 / 193 / 269 / 129 / 1007
    SD | 0.02 ± 0.07 | 52.2% | 85 / 307 / 278 / 241 / 106 / 1017
    BM | −0.01 ± 0.06 | 49.9% | 55 / 212 / 470 / 206 / 63 / 1006
    SJ | −0.03 ± 0.05 | 49.1% | 31 / 157 / 617 / 168 / 39 / 1012
    SC | −0.03 ± 0.05 | 49.1% | 34 / 183 / 541 / 190 / 45 / 993
    SK | −0.06 ± 0.09 | 47.4% | 200 / 227 / 111 / 276 / 205 / 1019
    SG | −0.09 ± 0.08 | 46.7% | 140 / 252 / 163 / 293 / 167 / 1015
    SH | −0.21 ± 0.07 | 44.0% | 55 / 237 / 308 / 270 / 144 / 1014
Table 4: Summary statistics of user-study responses from the appropriateness to interlocutor study, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties. Our model results (SJ) are highlighted in pink.
Table and caption from [11].

Figure 6: Significance of pairwise differences between conditions in the appropriateness to interlocutor study. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction. Conditions are listed in the same order as in Figure 4. Figure and caption from [11].

Figure 7: Bar plots visualising the response distribution in the appropriateness to interlocutor study. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied ("They are equal") responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colours correspond to slight preference, and darker colours to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score. Figure and caption from [11].

5.4 Observations
We observe that the animation generated by our model is smooth and temporally coherent, without jitter or sudden shifts in motion, while maintaining gesture beats in time with speech. Our model appears to reliably and realistically animate beat gestures.
Beat gestures are simple and fast movements of the hands and have a close relationship to prosodic activity such as acoustic energy and pitch [20, 27]. The PASE+ model used for encoding audio in our system was trained to estimate prosodic features as one of its downstream tasks, making the derived audio features particularly suitable for animating beat gestures.

We do not expect gestures to occur during every audio beat, but when they happen they should synchronise with the speech. Using the method of motion and audio beat extraction used in the beat-align score calculation presented in Liu et al. [13], we can visualise the onset of audio beats and motion gestures over time. Figure 8 shows two well-timed gestures for a 3-second audio clip. The utterance of "programs" shows a beat gesture: during the syllable utterance "pro", the speaker moves their right hand from right to left, and as the stressed syllable "grams" is spoken, the hand begins to change velocity and move from left to right. We also see an example of muted speech where our model continues to perform well. As there is no speech, there is little to inform gesture; we find the right arm drops to the side and the left arm lowers slightly. However, as the speech begins again, both arms raise in time with the speech.

A difference between natural mocap motion and our generated animation is that the latter does not exhibit sporadic, non-speech-related motion such as self-adaptor traits. Self-adaptors are movements that typically include self-touch, such as scratching the neck, clasping an elbow, adjusting hair or interlocking fingers [18]. Despite the indirect relationship between these behaviours and speech, these traits are linked to the perceived emotional stability of an agent [18] and may influence perceived human-likeness.

6 DISCUSSION
Our approach performed well with regard to human-likeness and appropriateness to speech.
Our model performed comparably to 10 of the other systems with regards to appropriateness to the interlocutor's speech, but clearly it can be improved in this area. We observe in Figure 7 and Table 4 that, for our system, participants preferred the mismatched stimuli least compared to all other systems (including natural mocap). The majority of responses were tied, meaning that they considered the mismatched stimuli to be of equal appropriateness to the matched animation. It is unclear where this uncertainty stems from and more work is required to evaluate this cause. There may be a lack of influence from the interlocutor speech in this model architecture. There are many ways to incorporate the interlocutor speech in this model, for example including it as an extra input to the self-attention rather than as cross-attention, or altering skip connections. These ideas, or simply increasing the number of cross-attention layers, may improve the performance of the appropriateness to the interlocutor.

More experiments are also required to determine the impact of including the interlocutor information on human-likeness and appropriateness to speech as well as appropriateness to interlocutor.

Figure 8: Generated gestures for given audio beats. Using a 3s audio clip from the test dataset we show the audio spectrogram, as well as aligned audio beat onsets and their corresponding onset strengths, as well as motion gesture onset detection of the right wrist using the method of beat detection defined in Liu et al. [13]. We can see during the syllable utterance "pro", the speaker moves their right hand from right to left and as the stressed syllable "grams" is spoken, the hand begins to move left to right. When there is silence, the arms begin to rest and again gesture in the next utterance.

This may have a positive effect on these two evaluations or may limit performance in these areas. Although our proposed method is deterministic, i.e.
the same inputs will always produce the same outputs, it could be possible to incorporate this design into a probabilistic model. For example, this approach could be adjusted to incorporate probabilistic diffusion methods [8, 19].

7 CONCLUSION
We have presented our submission to the GENEA Challenge 2023, a modified Transformer-XL based approach that utilises both self-attention and cross-attention. Our solution generates smooth, temporally coherent animation from the conversational speech of a main-agent and interlocutor. Subjective evaluation results support that our system performs well in regards to human-likeness and appropriateness, ranking third and second respectively when compared to the 14 other systems and baselines, and performing significantly better than 9 in both evaluations. Our approach continues to be competitive when evaluating the generated main-agent motion's appropriateness to the interlocutor, where only the natural mocap and 3 systems performed significantly better.

The UEA Digital Humans entry to the GENEA Challenge 2023. ICMI '23, October 9–13, 2023, Paris, France
KNk7Jb7LcN
Simple yet promising approach for gesture generation
8: Top 50% of accepted papers, clear accept
This paper proposes a transformer-based model for gesture generation. The authors improve a Transformer-XL architecture by adding a cross-attention module to incorporate the interlocutor's information. The technical descriptions are concise and easy to understand. The proposed model is not technically complex but performs well. The experimental results show that the generated gestures are preferably evaluated in terms of both human-likeness and appropriateness. Comments and questions: - The motivation for introducing the cross-attention is clearly explained. But it would be more informative if an ablation study of the proposed component were included. - In the abstract, it would be better not to spend half of it describing the GENEA challenge itself. Instead, spending much more on describing the original part of the entry would be better. - To improve the readability, I recommend placing the figures/tables above/below the main texts, not between the texts (Figure 1 is placed between the section title and the main text).
4: The reviewer is confident but not absolutely certain that the evaluation is correct
bBrebR1YpXe
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The UEA Digital Humans entry to the GENEA Challenge 2023
["Jonathan Windle", "Iain Matthews", "Ben Milner", "Sarah Taylor"]
This paper describes our entry to the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. The challenge aims to further the scientific knowledge of automatic gesture generation using a large-scale, joint subjective evaluation of many systems. This year's challenge focuses on generating gestures in a dyadic setting -- predicting a main-agent's motion from the speech of both the main-agent and an interlocutor. We adapt a Transformer-XL architecture for this task by adding a cross-attention module that integrates the interlocutor's speech with that of the main-agent. Our model is conditioned on speech audio (encoded using PASE+), text (encoded using FastText) and a speaker identity label, and is able to generate smooth and speech appropriate gestures for a given identity. We consider the GENEA Challenge user study results and present a discussion of our model strengths and where improvements can be made.
["uea digital humans", "genea challenge", "challenge", "speech", "interlocutor", "entry", "genea", "generation", "evaluation", "behaviour"]
ABSTRACT
This paper describes our entry to the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge 2023. This year's challenge focuses on generating gestures in a dyadic setting – predicting a main-agent's motion from the speech of both the main-agent and an interlocutor. We adapt a Transformer-XL architecture for this task by adding a cross-attention module that integrates the interlocutor's speech with that of the main-agent. Our model is conditioned on speech audio (encoded using PASE+), text (encoded using FastText) and a speaker identity label, and is able to generate smooth and speech appropriate gestures for a given identity. We consider the GENEA Challenge user study results and present a discussion of our model strengths and where improvements can be made.

CCS CONCEPTS
• Computing methodologies → Artificial intelligence; Animation.

KEYWORDS
Speech-to-gesture, 3D pose prediction, gesture generation, Transformer-XL, Self-Attention, Cross-Attention

ACM Reference Format:
Jonathan Windle, Iain Matthews, Ben Milner, and Sarah Taylor. 2023. The UEA Digital Humans entry to the GENEA Challenge 2023. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3577190.3616116

1 INTRODUCTION
Co-speech gesturing contributes to language production and perception during conversation. Gestures can aid conversation turn-taking and listener feedback while also providing semantic context, and may be indicative of emotion and emphasis [4, 9, 16, 22]. Speech-driven gesture generation has predominantly focused on estimating motion for monadic speech input of a main-agent, with no knowledge of interlocutor speech and no concept of interaction.
Instead, this year's GENEA challenge focuses on generating gestures in a dyadic setting – predicting a main-agent's motion from the speech of both the main-agent itself and also the speech of the interlocutor.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). ICMI '23, October 9–13, 2023, Paris, France. © 2023 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0055-2/23/10. https://doi.org/10.1145/3577190.3616116

We introduce a system to the GENEA Challenge 2023 that uses PASE+ [21] speech embeddings in conjunction with FastText [2] word embeddings and a speaker identity label as input to an adapted Transformer-XL [3] architecture to generate smooth, contextually and temporally coherent motion that can adapt to varying lengths of historic context. Specifically, we extend the Transformer-XL model to provide cross-attention with the interlocutor's speech to impart knowledge of both speakers into the prediction. Video examples and code can be found in the supplement at github.com/JonathanPWindle/uea-dh-genea23.

2 BACKGROUND & PRIOR WORK
Many speech-to-motion deep learning techniques are built upon recurrent models, such as bi-directional Long Short-Term Memory models (LSTMs) [5, 7, 23]. Transformer architectures are gaining traction over LSTM models in sequence-based AI, with sequence-based motion prediction models already making use of them [1, 10, 15, 24]. Transformer models do not have a concept of temporal position but can effectively model temporal information, often using a sinusoidal position embedding which is added to the input.

Transformers rely on attention mechanisms which inform the network which parts of data to focus on [25].
In self-attention, the mechanism is applied to the input sequence to find which elements within the same sequence may relate to each other and which are key to focus on. Conversely, cross-attention is computed for one input source in relation to a separate input source, calculating which elements from one sequence may relate and be important to focus on in another sequence.

To perform sequence-to-sequence generation using a vanilla transformer as defined in Vaswani et al. [25], a sequence is processed over a sliding window with a one-frame stride. For each window of input, one frame of output is generated. This is computationally expensive and the window size is limited by the longest input sequence seen during training. As the sequence length increases, the size of the self-attention mechanism also grows exponentially, leading to memory and computational limitations.

The Transformer-XL architecture [3] differs from the traditional transformer architecture in two key ways: 1) attention is calculated conditioned on the previous context, and 2) the positional encoding uses a learned relative embedding. The Transformer-XL architecture allows for extended attention beyond a fixed length by using segment-level recurrence with state reuse, allowing the alteration of context length. The Transformer-XL can therefore be trained efficiently on small segment lengths while retaining historical influence through the state reuse. As the historic context length can vary, the Transformer-XL introduces a learned, relative positional encoding scheme. Due to its improved ability for modelling sequences, we adapt the Transformer-XL architecture for dyadic gesture generation.

3 DATA & PREPROCESSING
Our model makes use of the GENEA challenge data [11] derived from the Talking With Hands dataset [12].
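The segment-level recurrence with state reuse described above can be illustrated in a few lines. The sketch below is not the paper's model: projection weights are random placeholders, and the relative position encoding, multiple heads and trained layers are omitted. It only shows how a cached memory of past hidden states extends the attention context of each new segment.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def process_in_segments(x, w, m, d):
    """Transformer-XL style segment-level recurrence sketch: a long
    sequence x of shape (T, d) is split into w-frame segments; each
    segment attends over [cached memory ; segment], then the cache is
    updated and truncated to the last m frames."""
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    memory = np.zeros((0, d))
    outputs = []
    for s in range(0, len(x), w):
        seg = x[s:s + w]
        ctx = np.concatenate([memory, seg])   # attend over memory + segment
        att = softmax((seg @ Wq) @ (ctx @ Wk).T / np.sqrt(d)) @ (ctx @ Wv)
        outputs.append(att)
        memory = ctx[-m:]                     # keep last m frames as cache
    return np.concatenate(outputs)

# 300 frames processed in 90-frame segments with a 180-frame memory.
y = process_in_segments(np.ones((300, 8)), w=90, m=180, d=8)
```

With w = 90 and m = 180 (the values later reported in Table 1), each segment after the first attends over up to 270 frames while only 90 new frames are processed at a time.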
This data includes dyadic conversations between a main-agent and interlocutor and consists of high-quality 30fps mocap data in Biovision Hierarchical (BVH) format, with corresponding speech audio and text transcripts. Our task is to generate the main-agent motion conditioned on both main-agent and interlocutor speech. We process both main-agent and interlocutor speech data the same way, using all available modalities: motion, speech, transcription and speaker identity.

3.1 Motion
Euler angles are required for test submission and are a convenient representation supported by many available 3D animation pipelines. Despite this, Euler angles are discontinuous and difficult for neural networks to learn [28]. We convert rotations to the 6D rotation representation presented by Zhou et al. [28] for its suitability to deep learning tasks. Global skeleton position is also encoded using three x, y, z values. All values are standardised by subtracting the mean and dividing by the variance computed from the training data.

Each identity in the dataset has a skeleton with different bone lengths. Additionally, per-frame joint offsets are also present in the data, possibly to account for bone-stretching in the data capture. Our analysis of these joint offset values revealed very low variance, and setting them to a pre-defined fixed value for all frames did not impact visual performance. We therefore compute one set of bone lengths and offsets per speaker to simplify the training pipeline. We randomly select a sample corresponding to each identity and fix the bone lengths and offsets accordingly using the first data frame. Joint positions can then be computed using the joint angles (measured or predicted) and pre-defined speaker-specific bone measurements.

3.2 Speech
3.2.1 Audio. We extract audio features using the problem-agnostic speech encoder (PASE+) [21]. PASE+ is a feature embedding learned using a multi-task learning approach to solve 12 regression tasks aimed at encoding important speech characteristics.
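As a concrete illustration of the 6D rotation representation, the construction of Zhou et al. [28], as we understand it, keeps the first two columns of the rotation matrix and recovers a valid rotation with Gram-Schmidt orthogonalisation. A minimal sketch:

```python
import numpy as np

def matrix_to_6d(R):
    """6D representation: the first two columns of a 3x3 rotation
    matrix, flattened into a 6-vector (Zhou et al.)."""
    return R[:, :2].T.reshape(6)

def six_d_to_matrix(d6):
    """Recover a valid rotation matrix via Gram-Schmidt on the two
    3-vectors; the third column is their cross product."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - (b1 @ a2) * b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)

# Round trip on a 90-degree rotation about the z-axis.
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
recovered = six_d_to_matrix(matrix_to_6d(Rz))
```

Unlike Euler angles, this mapping is continuous, which is the property [28] argues makes it easier for neural networks to regress.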
These 12 tasks include estimating MFCCs, FBANKs and other speech-related information, including prosody and speech content. PASE+ requires audio to be sampled at 16KHz, so we used band-sinc filtering to reduce the audio sample rate from 42KHz to 16KHz. We use the released, pre-trained PASE+ model to extract audio feature embeddings of size 768 that represent a 33ms window of audio, to align with the 30fps motion. The weights for this model are not updated during training.

3.2.2 Text. We extract features from the text transcriptions using the FastText word embedding described by Bojanowski et al. [2], using the pre-trained model released by Mikolov et al. [17]. For each spoken word, we extract the word embedding and align the embedding values to each 33ms window of motion. If no word is spoken at a given frame then a vector of zero values is passed. When a word is spoken across multiple frames, the vector is repeated for the appropriate number of frames.

4 METHOD
We adapt the Transformer-XL [3] architecture for speech-driven gesture generation. Specifically, we modify this architecture to use both self- and cross-attention. The advantage of the Transformer-XL architecture is that it allows us to model the longer-term relationship between speech and gesture for input of any duration.

Our feature extraction process, shown in Figure 1, is used to generate a feature vector X of length w for both the main-agent and interlocutor. These features are then passed to our model as shown in our overview Figure 2, where they are processed using a number of Self-Attention Blocks and Cross-Attention Blocks.

Figure 1: Outline of our data processing pipeline. Our process takes as input w frames, starting at frame t, of speech audio, text transcript and a speaker identity label to generate a feature vector X. We use pre-trained models for the audio and text inputs.
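The word-to-frame alignment in Section 3.2.2 (repeat a word's vector over every 33 ms frame it spans, zeros where nothing is spoken) can be sketched as below. The `words` format and the `embed` stub are illustrative assumptions; the real pipeline looks words up in the pre-trained FastText model.

```python
def align_words_to_frames(words, n_frames, fps=30, embed=None, dim=4):
    """Align word vectors to motion frames: repeat a word's vector over
    every frame it spans, zeros where no word is spoken. `words` is a
    list of (start_sec, end_sec, word); `embed` maps a word to a vector
    (a FastText lookup in the real pipeline; here a constant stub)."""
    zero = [0.0] * dim
    frames = [zero] * n_frames
    for start, end, word in words:
        vec = embed(word) if embed else [1.0] * dim  # stub embedding
        for f in range(int(start * fps), min(int(end * fps), n_frames)):
            frames[f] = vec
    return frames

# A word spoken from 0.1 s to 0.2 s covers frames 3-5 at 30 fps.
feats = align_words_to_frames([(0.1, 0.2, "hello")], n_frames=9)
```

The same per-frame alignment is what lets the text stream be concatenated with the 30 fps PASE+ audio features.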
The red box defines frozen weights.

Figure 2: Outline of our prediction model which takes as input w motion frames' worth of encoded conditioning information starting at time t and predicts w frames of body motion. We show a self-attention block and cross-attention block, where we extract Q, K, V vectors using main-agent or interlocutor speech according to the attention type, conditioned on the previous m hidden states M. These vectors are passed to the Transformer-XL attention block to calculate attention before being fed into a feed-forward block. A final linear layer predicts w poses ŷ_{t:t+w}.

4.1 Feature Extraction
We segment the input into non-overlapping segments of length w frames. For each segment, an input feature vector X is generated and used to predict Y, a sequence of poses of length w. Our model is called for each w-frame feature vector X. In a speech sequence of length T, it is therefore called ⌈T/w⌉ times.

For each segment, we extract audio (PASE+) features a_{t:t+w} and text (FastText) features f_{t:t+w} as described in Section 3.2, where t represents the start frame of a window w. For each utterance, there is also a speaker label provided. This is a unique ID which we pass to a learned embedding layer. The embedding layer acts as a lookup table for learned feature embeddings that are representative of each speaker's style.
The trainable weights ensure that two speakers with similar gesture styles are close in the latent embedding space, and conversely, those with different gesturing styles are far apart.

Each modality is extracted and concatenated into a single feature vector X as shown in Figure 1. Feature vectors for both the main-agent and the interlocutor are extracted in the same way using the same learned weights. This is because a speaker may appear as the main-agent in some sequences and the interlocutor in others.

4.2 Self-Attention
As shown in Figure 2, we process the features from the main-agent using a self-attention block. The attention score is defined in Vaswani et al. [25] as:

Attention(Q, K, V) = softmax(QK^T / √d_k) V

where Query Q, Key K, and Value V are all vectors; queries and keys are of dimension d_k, and values of dimension d_v. These vectors are often linear projections of an input vector into their respective dimensions d.

When calculating attention scores in the Transformer-XL model, historic context is included using segment-level recurrence with state reuse. This is achieved by caching previous hidden state sequences which can be used when processing future segments. When no historic context is present at the start of the speech sequence, our Transformer-XL extracts Q, K and V vectors from the main-agent inputs alone. The historic context from processed segments M of length m is cached as each segment is processed. Q, K and V vectors are then extracted from the subsequent inputs, conditioned on previous context. This process is completed using a Linear QKV Net, shown in Figure 2, which is a single linear layer.

Transformer models do not have inherent knowledge of positional order. To ensure temporal coherency, a positional encoding is often added to the input vectors to inject some position context to the model.
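The attention score above translates directly into code. A minimal NumPy sketch, with toy inputs and no learned projections (the real blocks project inputs through the Linear QKV Net first):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V

# With near-one-hot scores, each query effectively selects its
# matching row of V, which makes the mechanism easy to sanity-check.
V = np.arange(9.0).reshape(3, 3)
out = attention(np.eye(3) * 50, np.eye(3) * 50, V)
```

Each row of the softmax output is a probability distribution over the keys, so the result is always a convex combination of the value vectors.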
As the Transformer-XL architecture can have varying lengths of historic context and is not constrained to a maximum length, a learned relative position encoding r is instead utilised. The learned relative encoding is from a single linear layer and takes a sinusoidal position embedding for the full length of context, that is, the sum of both the memory length available and the query length. Rather than injecting the temporal information into the input before calculating Q, K and V, which is the approach used in Vaswani et al. [25], the Transformer-XL inputs this information after these vectors have been extracted, at the time of calculating the attention score.

Using Q, K and V in conjunction with the relative position encoding r, we use the Transformer-XL attention block to calculate attention vectors. As Figure 2 shows, these attention vectors are then passed to a Feed Forward Block, which comprises two Linear layers with a ReLU activation on the first output and dropout applied to both.

Each self-attention block has multiple attention heads, each aiming to extract different attention features, and a self-attention block is repeated N_self times, with each layer feeding its output to the next. Memory values M are persisted on a per-layer basis and therefore hidden states are specific to each self-attention block. The length of this memory m can be altered during training and evaluation.

4.3 Cross-Attention
While it is reasonable to assume the main-agent speech is driving the majority of the gestures, the interlocutor can also influence the motion of the agent, indicating turn taking and backchannel communication. For example, the main-agent might nod to show agreement or understanding when the interlocutor is speaking. Therefore we aim to derive the main source of information driving the motion from the main-agent's speech, but also include the interlocutor's speech.
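The essential difference of such a cross-attention layer from self-attention is only where Q versus K and V come from. A stripped-down sketch with random placeholder projection weights; the real blocks also carry Transformer-XL memory, relative position encoding and multiple heads, all omitted here:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(main_enc, interloc, Wq, Wk, Wv):
    """Cross-attention sketch: the Query comes from the main-agent
    encoding, Key and Value from the interlocutor features, so each
    main-agent frame attends over the interlocutor's speech."""
    Q = main_enc @ Wq
    K, V = interloc @ Wk, interloc @ Wv
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

rng = np.random.default_rng(1)
d = 8
main_enc = rng.standard_normal((90, d))    # one w=90 frame segment
interloc = rng.standard_normal((90, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
out = cross_attention(main_enc, interloc, Wq, Wk, Wv)
```

Swapping the source of K and V to the other speaker is the entire change relative to the self-attention sketch; the output still has one row per main-agent frame.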
We adapt the Transformer-XL to not only compute self-attention over the main-agent inputs, but to also utilise cross-attention from the interlocutor, while maintaining segment-level recurrence and relative position encoding. This cross-attention block is shown in Figure 2.

Cross-attention is an attention mechanism where the Query Q is extracted from the input source and the Key K and Value V are extracted from an external input element. Our cross-attention block uses a similar approach to the self-attention block defined in Section 4.2, but instead has two separate networks to process the inputs: one to extract Q from the main-agent self-attention encoding and one to extract K and V derived from the interlocutor speech. For each layer of cross-attention blocks, the input to the Q net is a skip connection from the output of the self-attention encoder and therefore remains the same input for all cross-attention blocks. The input to the KV net in the first iteration is the interlocutor feature vectors (described in Section 4.1), and the output from a cross-attention block thereafter.

The output from the cross-attention block is then passed to a single linear layer which predicts Y, the standardised 6D rotations of each joint and the global position of the skeleton.

4.4 Training Procedure
For each segment of speech of length w, we predict the pose represented by a vector of joint rotations Ŷ of length w. In motion synthesis it is common to include both geometric and temporal constraints in the loss function to ensure that the model generates output that is both geometrically and dynamically plausible [6, 24, 26]. Our loss function L_c comprises multiple terms including an L1 loss on the rotations (L_r), positions (L_p), velocity (L_v), acceleration (L_a) and kinetic energy (L_v2) of each joint.
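The loss terms just listed can be sketched with finite differences standing in for the derivatives f′ and f″. This is only an illustrative approximation; the λ default values below are the values reported in Table 1 of the paper.

```python
import numpy as np

def l1(a, b):
    return np.mean(np.abs(a - b))

def composite_loss(yp, yp_hat, yr, yr_hat,
                   lp=0.01, lv=0.5, la=0.5, lr=1.0, lv2=0.2):
    """Composite loss sketch: L1 on rotations and positions, plus L1 on
    first/second temporal differences of positions (velocity and
    acceleration) and on squared velocity (the kinetic-energy term)."""
    d = lambda a: np.diff(a, axis=0)        # finite-difference f'
    dd = lambda a: np.diff(a, n=2, axis=0)  # finite-difference f''
    return (lr * l1(yr, yr_hat)
            + lp * l1(yp, yp_hat)
            + lv * l1(d(yp), d(yp_hat))
            + la * l1(dd(yp), dd(yp_hat))
            + lv2 * l1(d(yp) ** 2, d(yp_hat) ** 2))

# A perfect prediction over a 90-frame segment gives zero loss.
y = np.random.default_rng(0).standard_normal((90, 3))
total = composite_loss(y, y, y, y)  # → 0.0
```

Note that a prediction offset by a constant incurs only the position term: the velocity, acceleration and energy terms are invariant to constant shifts, which is exactly why the temporal terms are needed to penalise jitter rather than offset.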
If we take y_r and ŷ_r to be natural mocap and predicted 6D rotations respectively, and y_p and ŷ_p to be positions in world space computed using forward kinematics given the predicted joint angles and the pre-defined speaker-specific bone lengths, we use the following loss function:

L_r  = L1(y_r, ŷ_r)
L_p  = L1(y_p, ŷ_p)
L_v  = L1(f'(y_p), f'(ŷ_p))
L_v2 = L1(f'(y_p)², f'(ŷ_p)²)
L_a  = L1(f''(y_p), f''(ŷ_p))
L_c  = λ_p L_p + λ_v L_v + λ_a L_a + λ_r L_r + λ_v2 L_v2    (1)

where f' and f'' are the first and second derivatives respectively. Each term has a λ weighting to control its importance in the loss.

Table 1 summarises the parameters used, optimised using a random grid search parameter sweep. These settings were chosen using a combination of low validation loss values and the quality of the predicted validation sequences as observed by our team. We train our model for 1770 epochs using the AdamW [14] optimiser and found that a segment length w of 90 frames and a memory length m of 180 frames was optimal. The Feed Forward Blocks used in both self- and cross-attention layers are comprised of the same topology and size.

Table 1: Training hyperparameters.
Hyperparameter                                  Value
TransformerXL   Head Dimension                  32
                Number Heads                    32
                Self-Attention Layers (N_self)  4
                Cross-Attention Layers (N_cross) 2
                Feed Forward Block Dropout      0.2
                Hidden Size                     4096
Embeddings      Feature Embedding               1024
                Speaker Embedding               8
Training        Batch Size                      32
                Learning Rate                   0.00001
                λ_r                             1
                λ_p                             0.01
                λ_v, λ_a                        0.5
                λ_v2                            0.2
Context         Segment Length (w)              90 frames
                Memory Length (m)               180 frames

5 RESULTS
Our approach is evaluated in conjunction with the GENEA Challenge 2023 [11]. Each challenge participant submitted 70 BVH files for main-agent motion generated using the speech of the main-agent and interlocutor for each interaction.
Using these submitted BVH files, motion is rendered on the same character for comparison. There are three studies of interest in this challenge: human-likeness, appropriateness to speech and appropriateness to interlocutor. Each challenge participant is assigned a unique ID to provide anonymity during the evaluation process; our ID, which will be used in Figures and Tables throughout, is SJ. NA denotes natural motion of the mocap sequences; BD and BM are baseline systems in a dyadic and monadic setting respectively. We give a brief overview of each evaluation method; however, we strongly recommend also reading the main challenge paper [11] for full details.

Condition  Median        Mean (human-likeness)
NA         71 ∈ [70,71]  68.4 ± 1.0
SG         69 ∈ [67,70]  65.6 ± 1.4
SF         65 ∈ [64,67]  63.6 ± 1.3
SJ         51 ∈ [50,53]  51.8 ± 1.3
SL         51 ∈ [50,51]  50.6 ± 1.3
SE         50 ∈ [49,51]  50.9 ± 1.3
SH         46 ∈ [44,49]  45.1 ± 1.5
BD         46 ∈ [43,47]  45.3 ± 1.4
SD         45 ∈ [43,47]  44.7 ± 1.3
BM         43 ∈ [42,45]  42.9 ± 1.3
SI         40 ∈ [39,43]  41.4 ± 1.4
SK         37 ∈ [35,40]  40.2 ± 1.5
SA         30 ∈ [29,31]  32.0 ± 1.3
SB         24 ∈ [23,27]  27.4 ± 1.3
SC          9 ∈ [9,9]    11.6 ± 0.9

Table 2: Summary statistics of user-study ratings from the human-likeness study, with confidence intervals at the level α = 0.05. Conditions are ordered by decreasing sample median rating. Our model results are highlighted in pink. Table and caption from [11].

Figure 3: Significance of pairwise differences between conditions in the human-likeness study. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction. Conditions are listed in the same order as in Table 2.
Figure and caption from [11].

5.1 Human Likeness
This user study aims to evaluate how human-like the generated motion is, independent of the speech. Although each comparison system's motion corresponds to the same input speech and conditioning, these sequences were muted to ensure ratings can only depend on the motion seen in the videos. 8 systems were compared at any one time and participants were asked "Please indicate on a sliding scale how human-like the gesture motion appears". Study participants gave their ratings in response to this question on a scale from 0 (worst) to 100 (best).

Summary statistics (median, mean) are shown in Table 2 and significance comparisons are provided in Figure 3. Our system (SJ) was evaluated to be the third highest ranking of submitted systems with regards to mean and median human-likeness score. Figure 3 shows only NA, SG and SF are significantly better than our system. Our system scores significantly higher than 9 other systems, including both baseline systems.

Condition  MAS          Pref. matched  Raw response counts (2, 1, 0, −1, −2, Sum)
NA         0.81 ± 0.06  73.6%          755  452  185   217  157  1766
SG         0.39 ± 0.07  61.8%          531  486  201   330  259  1807
SJ         0.27 ± 0.06  58.4%          338  521  391   401  155  1806
BM         0.20 ± 0.05  56.6%          269  559  390   451  139  1808
SF         0.20 ± 0.06  55.8%          397  483  261   421  249  1811
SK         0.18 ± 0.06  55.6%          370  491  283   406  252  1802
SI         0.16 ± 0.06  55.5%          283  547  342   428  202  1802
SE         0.16 ± 0.05  54.9%          221  525  489   453  117  1805
BD         0.14 ± 0.06  54.8%          310  505  357   422  220  1814
SD         0.14 ± 0.06  55.0%          252  561  350   459  175  1797
SB         0.13 ± 0.06  55.0%          320  508  339   386  262  1815
SA         0.11 ± 0.06  53.6%          238  495  438   444  162  1777
SH         0.09 ± 0.07  52.9%          384  438  258   393  325  1798
SL         0.05 ± 0.05  51.7%          200  522  432   491  170  1815
SC        −0.02 ± 0.04  49.1%          72   284  1057  314  76   1803

Table 3: Summary statistics of user-study responses from the appropriateness to speech study, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. “Pref.
matched” identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties. Our model results are highlighted in pink. Table and caption from [11].

5.2 Speech Appropriateness
To measure the appropriateness of gestures to speech, participants were asked to view two videos and answer "Which character's motion matches the speech better, both in terms of rhythm and intonation and in terms of meaning?". Both video stimuli are from the same condition and thus ensure the same motion quality, but one matches the speech and the other is mismatched, generated from an unrelated speech sequence. Five response options were available, namely "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better". Each answer is assigned a value of −2, −1, 0, 1, 2 where a negative value is given for a preference for mismatched motion and a positive value for a preference for matched motion.

Table 3 provides summary statistics and win rates, Figure 4 visualises the response distribution and Figure 5 shows significance comparisons. Our approach (SJ) ranked second among the submitted systems. Figure 5 shows that there are few significant differences between pairwise systems. Only SG and the natural mocap (NA) rank significantly better than our system. Again, our system ranks significantly better than 9 other conditions, including the dyadic baseline system.

Figure 4: Bar plots visualising the response distribution in the appropriateness to speech study.
The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied ("They are equal") responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colours correspond to slight preference, and darker colours to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score. Figure and caption from [11].

5.3 Interlocutor Appropriateness
As this year's challenge includes awareness of the interlocutor's speech and motion, the appropriateness of the generated main-agent motion to the interlocutor's speech is also evaluated. This was done using a technique similar to that used for measuring speech appropriateness, but it differed in several important aspects. The test data contained pairs of interactions, one with matched main-agent and interlocutor interactions, and another with the same main-agent speech but mismatched interlocutor speech. Preference can be quantified for generated motion with matched over mismatched interlocutor behaviour, and we can assess how interlocutor behaviour affects the motion.

Our system ranked 8th in this study, but only natural mocap, SA, BD and SL are rated significantly higher than it. There is no other significant difference to any other system, except SH, where we were significantly better.
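As a concrete check of how the mean appropriateness score (MAS) in Tables 3 and 4 follows from the raw response counts, the response values 2, 1, 0, −1, −2 defined in Section 5.2 are simply averaged over all responses. A small sketch using our system's (SJ) row from Table 4:

```python
def mean_appropriateness(counts):
    """Mean appropriateness score: responses valued 2, 1, 0, -1, -2
    (clear/slight preference for matched, tie, slight/clear preference
    for mismatched), averaged over all responses."""
    values = [2, 1, 0, -1, -2]
    return sum(v * c for v, c in zip(values, counts)) / sum(counts)

# Raw response counts for SJ in the interlocutor study (Table 4).
sj_interloc = [31, 157, 617, 168, 39]
sj_mas = mean_appropriateness(sj_interloc)  # ≈ -0.03, matching Table 4
```

Note that the "Pref. matched" percentage in the same tables is computed differently (from preference counts, ignoring ties) and is not reproduced here.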
We observe from the statistics in Figure 7 that our system had the lowest number of negative scores (preference for the mismatched dyadic interaction), and a large number of no-preference scores.

Figure 5: Significance of pairwise differences between conditions in the appropriateness to speech evaluation. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction. Conditions are listed in the same order as in Table 3. Figure and caption from [11].

Condition  MAS          Pref. matched  Raw response counts (2, 1, 0, −1, −2, Sum)
NA         0.63 ± 0.08  67.9%          367  272  98   189  88   1014
SA         0.09 ± 0.06  53.5%          77   243  444  194  55   1013
BD         0.07 ± 0.06  53.0%          74   274  374  229  59   1010
SB         0.07 ± 0.08  51.8%          156  262  206  263  119  1006
SL         0.07 ± 0.06  53.4%          52   267  439  204  47   1009
SE         0.05 ± 0.07  51.8%          89   305  263  284  73   1014
SF         0.04 ± 0.06  50.9%          94   208  419  208  76   1005
SI         0.04 ± 0.08  50.9%          147  269  193  269  129  1007
SD         0.02 ± 0.07  52.2%          85   307  278  241  106  1017
BM        −0.01 ± 0.06  49.9%          55   212  470  206  63   1006
SJ        −0.03 ± 0.05  49.1%          31   157  617  168  39   1012
SC        −0.03 ± 0.05  49.1%          34   183  541  190  45   993
SK        −0.06 ± 0.09  47.4%          200  227  111  276  205  1019
SG        −0.09 ± 0.08  46.7%          140  252  163  293  167  1015
SH        −0.21 ± 0.07  44.0%          55   237  308  270  144  1014

Table 4: Summary statistics of user-study responses from the appropriateness to interlocutor study, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties. Our model results are highlighted in pink.
Table and caption from [11].

[Figure 6 shows a significance matrix over the conditions NA, SA, BD, SB, SL, SE, SF, SI, SD, BM, SJ, SC, SK, SG, SH.]

Figure 6: Significance of pairwise differences between conditions in the appropriateness to interlocutor study. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction. Conditions are listed in the same order as in Figure 4. Figure and caption from [11].

[Figure 7 shows stacked bar plots of annotator preferences per condition (clear/slight preference for matched, no preference, slight/clear preference for mismatched).]

Figure 7: Bar plots visualising the response distribution in the appropriateness to interlocutor study. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied (“They are equal”) responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colours correspond to slight preference, and darker colours to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score. Figure and caption from [11].

ICMI ’23, October 9–13, 2023, Paris, France. Windle, et al.

5.4 Observations

We observe that the animation generated from our model is smooth and temporally coherent, without jitter or sudden shifts in motion, while maintaining gesture beats in time with speech. Our model appears to reliably and realistically animate beat gestures.
Beat gestures are simple and fast movements of the hands and have a close relationship to prosodic activity such as acoustic energy and pitch [20, 27]. The PASE+ model used for encoding audio in our system was trained to estimate prosodic features as one of its downstream tasks, making the derived audio features particularly suitable for animating beat gestures.

We do not expect gestures to occur during every audio beat, but when they happen they should synchronise with the speech. Using the method of motion and audio beat extraction used in the beat-align score calculation presented in Liu et al. [13], we can visualise the onsets of audio beats and motion gestures over time. Figure 8 shows two well-timed gestures for a 3-second audio clip. The utterance of “programs” shows a beat gesture: during the syllable utterance “pro”, the speaker moves their right hand from right to left, and as the stressed syllable “grams” is spoken, the hand begins to change velocity and move from left to right. We also see an example of muted speech where our model continues to perform well. As there is no speech, there is little to inform gesture; we find the right arm drops to the side, and the left arm lowers slightly. However, as the speech begins again, both arms raise in time with the speech.

A difference between natural mocap motion and our generated animation is that the latter does not exhibit sporadic, non-speech-related motion such as self-adaptor traits. Self-adaptors are movements that typically include self-touch, such as scratching the neck, clasping an elbow, adjusting hair or interlocking fingers [18]. Despite the indirect relationship between these behaviours and speech, these traits are linked to the perceived emotional stability of an agent [18] and may influence perceived human-likeness.

6 DISCUSSION

Our approach performed well with regard to human-likeness and appropriateness to speech.
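The beat-alignment measure used for the analysis above (Liu et al. [13]) scores how close each gesture onset lands to its nearest audio beat. A numpy sketch of that idea; the Gaussian tolerance `sigma` is an assumed parameter, not taken from the original work:

```python
import numpy as np

def beat_align_score(motion_onsets, audio_beats, sigma=0.1):
    """Beat-alignment score in the spirit of Liu et al.: for every motion
    (gesture) onset, find the nearest audio beat and score the distance
    with a Gaussian kernel. 1.0 means every gesture lands exactly on a
    beat. Onset/beat times are in seconds; sigma is an assumed tolerance."""
    motion_onsets = np.asarray(motion_onsets, dtype=float)
    audio_beats = np.asarray(audio_beats, dtype=float)
    # pairwise |gesture onset - audio beat| distances, then nearest beat
    dists = np.abs(motion_onsets[:, None] - audio_beats[None, :]).min(axis=1)
    return float(np.mean(np.exp(-dists**2 / (2 * sigma**2))))

# perfectly aligned gestures score 1.0; offset gestures score lower
print(beat_align_score([0.5, 1.0, 1.5], [0.5, 1.0, 1.5, 2.0]))  # -> 1.0
```

Note that gestures need not occur on every beat for a high score; only the gestures that do occur are scored against their nearest beat.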
Our model performed comparably to 10 of the other systems with regard to appropriateness to the interlocutor’s speech, but it can clearly be improved in this area. We observe in Figure 7 and Table 4 that, for our system, participants preferred the mismatched stimuli least compared to all other systems (including natural mocap). The majority of responses were tied, meaning that participants considered the mismatched stimuli to be of equal appropriateness to the matched animation. It is unclear where this uncertainty stems from, and more work is required to evaluate its cause. There may be a lack of influence from the interlocutor speech in this model architecture. There are many ways to incorporate the interlocutor speech in this model, for example including it as an extra input to the self-attention rather than as cross-attention, or altering skip connections. These ideas, or simply increasing the number of cross-attention layers, may improve performance on appropriateness to the interlocutor.

More experiments are also required to determine the impact of including the interlocutor information on human-likeness and appropriateness to speech as well as appropriateness to interlocutor.

[Figure 8 shows an audio spectrogram with aligned audio beat onsets and right-wrist motion onsets for the word “programs”, a muted segment, and the word “medical”.]

Figure 8: Generated gestures for given audio beats. Using a 3 s audio clip from the test dataset, we show the audio spectrogram, the aligned audio beat onsets with their corresponding onset strengths, and motion gesture onset detection of the right wrist, using the method of beat detection defined in Liu et al. [13]. We can see that during the syllable utterance “pro”, the speaker moves their right hand from right to left, and as the stressed syllable “grams” is spoken, the hand begins to move left to right. When there is silence, the arms begin to rest, and they gesture again in the next utterance.

This may have a positive effect on these two evaluations or may limit performance in these areas.

Although our proposed method is deterministic, i.e.
the same inputs will always produce the same outputs, it could be possible to incorporate this design into a probabilistic model. For example, this approach could be adjusted to incorporate probabilistic diffusion methods [8, 19].

7 CONCLUSION

We have presented our submission to the GENEA Challenge 2023, a modified Transformer-XL based approach that utilises both self-attention and cross-attention. Our solution generates smooth, temporally coherent animation from the conversational speech of a main agent and interlocutor. Subjective evaluation results support that our system performs well with regard to human-likeness and appropriateness, ranking third and second respectively when compared to the 14 other systems and baselines, and performing significantly better than 9 of them in both evaluations. Our approach remains competitive when evaluating the generated main-agent motion’s appropriateness to the interlocutor, where only the natural mocap and 3 systems performed significantly better.
hfKuOoyfd3
This paper describes a method to synthesize dyadic gestures given speech audio, a transcript and a speaker identity label. The authors use Transformer-XL with self-attention and cross-attention to do so. The paper is well written and technically sound.
7: Good paper, accept
The paper is well-organized and clearly written. The overall design and technicality of the method seem plausible, and the figures also help to understand the method well. The authors provide sufficient experiments, both statistical and subjective evaluations, to back up their claims. A few points to note: 1. I felt that Sections 4.2 and 4.3 could be shortened, as the authors only describe the self-attention and cross-attention mechanisms in a generic way that can already be found in other papers. 2. Does using losses on velocity, acceleration, and kinetic energy give additional benefit? Or is there a chance of overfitting the training data by over-parameterizing these losses? Overall, I think the solution is well thought out and has potential.
3: The reviewer is fairly confident that the evaluation is correct
mK2qMNf0_Nd
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Co-Speech Gesture Generation via Audio and Text Feature Engineering
["Geunmo Kim", "Jaewoong Yoo", "Hyedong Jung"]
In recent years, the field of human-computer interaction (HCI) research has seen increasing efforts to model social intelligence and behavior based on artificial intelligence. For human-agent communication to evolve in a “human way”, non-verbal features can be used as important factors. We conducted our research as part of the GENEA Challenge 2023, where the task is to generate human gestures using these non-verbal elements. We applied two main approaches to generating natural gestures. First, we modified the provided baseline model to apply RoBERTa-based speech transcription embedding, and second, we designed a gesture generation model by adding a zero-crossing rate and rhythmical features to the input features. The gestures generated by this method were evaluated as unnatural in terms of human-likeness and appropriateness. However, building on this, we will study SOTA model structures for gesture generation in the future and apply various preprocessing methods to the input data to generate natural gestures.
["Human-computer interaction (HCI)", "Gesture generation", "Deep learning", "Multimodal Learning"]
ABSTRACT

In recent years, the field of human-computer interaction (HCI) research has seen increasing efforts to model social intelligence and behavior based on artificial intelligence. For human-agent communication to evolve in a “human way”, non-verbal features can be used as important factors. We conducted our research as part of the GENEA Challenge 2023 [13], where the task is to generate human gestures using these non-verbal elements. We applied two main approaches to generating natural gestures. First, we modified the provided baseline model to apply RoBERTa-based speech transcription embedding, and second, we designed a gesture generation model by adding a zero-crossing rate and rhythmical features to the input features. The gestures generated by this method were evaluated as unnatural in terms of human-likeness and appropriateness. However, building on this, we will study SOTA model structures for gesture generation in the future and apply various preprocessing methods to the input data to generate natural gestures.

CCS CONCEPTS
• Human-centered computing → Human computer interaction (HCI).

KEYWORDS
Human-Computer Interaction (HCI), Gesture Generation, Deep Learning, Multimodal Learning

ACM Reference Format:
Geunmo Kim, Jaewoong Yoo, and Hyedong Jung. 2023. Co-Speech Gesture Generation via Audio and Text Feature Engineering. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI ’23 Companion), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3610661.3616553

1 INTRODUCTION

In recent years, the field of Human-Computer Interaction (HCI) research has seen an increase in efforts to model social intelligence and behavior based on artificial intelligence [2, 3].
According to Albert Mehrabian’s three elements of communication [20], humans rely more on para-verbal and non-verbal elements of communication than on verbal elements.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. ICMI ’23 Companion, October 9–13, 2023, Paris, France. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0321-8/23/10...$15.00. https://doi.org/10.1145/3610661.3616553

In order for human-agent communication to evolve towards the human way, para-verbal and non-verbal behavioral cues can be used as important elements. People usually express social signals and behaviors through non-verbal behavioral cues such as facial expressions, body postures and gestures, or para-verbal behavioral cues such as tone and pitch of vocal sounds [26]. According to Vinciarelli et al. (2009) [26], 90% of non-verbal behavioral cues are associated with speech. Therefore, assuming that a matching gesture exists based on audio and speech data, we participated in the GENEA Challenge 2023 and carried out the co-speech gesture generation task. The generated co-speech gestures can be utilized for multi-modal fusion by considering, matching and combining verbal, para-verbal, and non-verbal features in future research on human-agent communication.

In traditional gesture generation research, motion system frameworks have been proposed as concatenative approaches such as motion graphs [10].
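The concatenative motion-graph idea mentioned above can be sketched in toy form: frames become graph nodes, and playback may jump between frames whose poses are similar enough to stitch smoothly. The 2-D poses and the distance threshold below are invented purely for illustration:

```python
import numpy as np

def build_motion_graph(poses, threshold=0.5):
    """Toy motion graph: nodes are frames, and an edge i -> j means playback
    may jump from frame i to frame j because their poses are similar enough.
    Consecutive frames are always connected. `poses` has shape (frames, dims)."""
    n = len(poses)
    edges = {i: [] for i in range(n)}
    for i in range(n - 1):
        edges[i].append(i + 1)            # natural playback order
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 1 and np.linalg.norm(poses[i] - poses[j]) < threshold:
                edges[i].append(j)        # transition edge (similar poses)
    return edges

poses = np.array([[0.0, 0.0], [1.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
graph = build_motion_graph(poses)
print(graph[0])  # -> [1, 2]: natural playback to frame 1, jump to similar frame 2
```

Synthesis then amounts to pathfinding through such a graph, choosing edges so that the resulting motion matches the target speech.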
In recent years, learning-based approaches have been used to generate high-quality and interactive gestures by utilizing neural networks such as FFNNs, RNNs, GANs, and VAEs [6, 8, 11, 22, 24]. There are also studies on gesture generation tasks using text, speaker identity and style, and personality parameters as input features for generation models [1, 12, 23, 27]. In the GENEA Challenge 2023, our team applied two main approaches to achieve more natural gestures that match the speech more appropriately. First, we modified the provided baseline model with RoBERTa-based embedding for speech transcription, and second, we designed a gesture generation model by adding a zero-crossing rate and rhythmical features as additional audio features to the input features.

As a result, our entry was evaluated as unnatural in terms of human-likeness and appropriateness. After checking with a 3D animation tool, we found that there were some natural gestures, but most of them were inappropriate for the speech. Through this experiment, we realized that using more features does not always lead to better generation performance.

2 BACKGROUND AND PRIOR WORK

2.1 Data-driven gesture generation research

Data-driven gesture generation models learn from a large amount of data, such as audio, text, and pose data, and generate gestures that correspond to the data. A variety of studies [7][18][19][29] use data-driven generative models to generate gestures.

Habibie, Ikhsanul, et al. [7] combined the benefits of database matching and adversarial learning to generate 3D gestures. The paper used the k-Nearest Neighbors (k-NN) algorithm to consider the similarity between the ground-truth audio-pose data stored in the database and the input data. Based on this, the audio-pose data stored in the database is searched sequentially to find the entry with the highest similarity to the input data.
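The database-matching step described for Habibie et al. [7] can be sketched as a plain nearest-neighbour search; the Euclidean distance and the toy feature vectors below are our assumptions, standing in for whatever similarity measure the original work uses:

```python
import numpy as np

def knn_match(query, database_keys, k=1):
    """Toy version of database matching: find the indices of the k audio
    feature vectors in the database closest to the query. Their paired
    poses would then condition the generator. Euclidean distance is an
    assumed stand-in for the original similarity measure."""
    dists = np.linalg.norm(database_keys - query, axis=1)
    return np.argsort(dists)[:k]

audio_db = np.array([[0.0, 1.0], [5.0, 5.0], [0.1, 0.9]])  # stored audio features
query = np.array([0.0, 0.95])
print(knn_match(query, audio_db, k=2))  # nearest entries: indices 0 and 2
```

The matched entries then serve as the additional conditioning information that the paper feeds to the cGAN generator.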
Then, a Conditional Generative Adversarial Network (cGAN) model [21] was used to generate gestures corresponding to the input data. Unlike the GAN model, the cGAN model can use additional information, such as the label of the input data, to generate the desired data while the generator and discriminator are training. Therefore, the paper used the results of the k-NN algorithm as additional information to generate gestures corresponding to the input data.

Lu, Shuhong, et al. [18] used the encoder structure of Liu, Xian, et al. [17] to extract features from text and audio, and the Vector-Quantized Variational AutoEncoder (VQ-VAE) model [25] to extract gesture features. The VQ-VAE model applies vector quantization (VQ) to the VAE model. Vector quantization is a technique that uses an algorithm similar to k-means clustering to replace continuous probability values with discrete values. By doing so, they converted the latent values of the gesture data into low-dimensional vectors. As a result, they generate gestures similar to the input data by learning low-dimensional latent variables that better represent the features of the gesture data.

Lu, Shuhong, et al. [19] considered the problem that, when generating gestures based on speech data, multiple gestures may be generated for the same speech data. To solve this problem, they used individual gesture tokens and a Residual-Quantized Variational Autoencoder (RQ-VAE) model [14]. By using discrete gesture tokens, they addressed the mapping problem of gesture generation by assigning different probabilities to the different gestures generated from the same speech data. They also used the RQ-VAE model to train the discrete gesture tokens. The RQ-VAE model recursively discretizes the latent variables in the input data to reduce the loss of information as the encoding progresses. This resulted in higher-quality gestures.

Zhang, Fan, et al. [29] proposed the DiffMotion model, based on the diffusion model, for gesture generation.
The DiffMotion model consists of an Autoregressive Temporal Encoder (AT-Encoder) and a Denoising Diffusion Probabilistic Module (DDPM). The AT-Encoder uses a multi-layer LSTM structure to encode the temporal context of the speech data. Then, through the diffusion and generation process of the DDPM model, it learned a one-to-many mapping between input data and gestures and generated new gestures.

2.2 Multimodal gesture generation research

Multimodal research utilizes various types of data through multiple modalities to overcome the limitations of using only a single type of data for learning. Feature vectors are extracted using a deep learning structure suitable for each modality, and multiple tasks are performed based on them. Multimodal gesture generation research uses audio, text, and pose data as input data for each modality to extract feature vectors and utilizes them to generate gestures that correspond to the input data. Various studies use this multimodal structure to generate gestures.

Kim, Gwantae, et al. [9] proposed a new framework, Multimodal Pretrained Encoder for Feature Generation (MPE4G), to generate natural gestures using speech, text, and motion as input data for a multimodal structure. This framework solves the problem of inaccurate gesture generation when there is noise in the input data used for training. To achieve this, the proposed framework consists of three main steps. First, a frame-by-frame embedder and generator are trained with a joint embedding loss and a reconstruction loss. Second, a multimodal encoder is trained with a self-supervised learning approach. Third, the embedder, encoder, decoder, and generator are jointly trained using supervised learning. Based on these components, the framework not only achieved good performance in gesture generation but also handled problems such as noise in the input data and generated natural gestures that respond to the input data.

3 METHOD

Our model structure for gesture generation is based on [4].
Our model consists of an encoder, an attention module, and a decoder, as shown in Figure 1.

The encoder consists of character embedding, three 1D convolution layers, and a bi-directional LSTM. When a one-hot vector is input, it is converted into an embedding vector through character embedding. It is then converted to an encoded feature through the convolutional layers and the bi-directional LSTM. Attention is the process of aligning which information to take from the encoder, using the encoded features from the encoder and the features generated at the previous step by the decoder's LSTM. In our model, we use a locality-constrained attention as in [4]. The decoder consists of two (fully connected layer + ReLU) blocks, a uni-directional LSTM, a fully connected layer, and five convolutional layers. The alignment information obtained through attention and the gesture feature generated at the previous time step are used to generate the gesture feature at the next time step. Through this process, gestures corresponding to the input data are generated.

For gesture generation, we built on the aforementioned model structure and focused on the input features. First, to vary the text features, we used a pretrained RoBERTa-based model (784 dimensions) for word embeddings. Next, we used MFCC, mel-spectrogram, pitch, and energy, which are commonly used audio features, as well as the zero-crossing rate and rhythmical features.

We used two NVIDIA A100-SXM4-80GB GPUs to train the aforementioned models. For both the monadic and dyadic settings, we trained for a total of 25,000 iterations and set the learning rate to 1e-4. We also used a weight decay value of 1e-6 and a batch size of 64 to match the GPU memory. For the optimizer and loss function, we used the Adam optimizer and the MSE loss.

3.1 Data and data processing

We trained our model using the dataset [15] provided by the GENEA Challenge 2023.
The dataset is based on the Talking With Hands 16.2M gesture dataset, which contains audio and motion-capture data of several pairs of people talking freely about various topics. The dataset consists of 372 training samples and 41 validation samples. The training and validation datasets contain motion-capture data (BVH format), audio (WAV format), transcript (CSV format) data corresponding to the motion, and speaker ID (CSV format) data, respectively. Since the GENEA Challenge 2023 [13] considers not only monadic but also dyadic situations, unlike the GENEA Challenge 2022 [28], the training and validation datasets include the main agent and, additionally, the interlocutor.

Co-Speech Gesture Generation via Audio and Text Feature Engineering. ICMI ’23 Companion, October 9–13, 2023, Paris, France

Figure 1: Our Proposed Architecture

3.1.1 Motion. We extracted features from the motion using the PyMo library for gesture generation. The motion FPS is 30. We used an exponential map [5] to represent 3D motion. Unlike the GENEA Challenge 2022, the GENEA Challenge 2023 evaluates only the full body [13]. Therefore, we utilised the motion features corresponding to the full body, using the root position, 19 keypoints in the upper body, and 6 keypoints in the lower body. The full body therefore has 78 dimensions.

3.1.2 Audio. We extracted several features from the audio for gesture generation. The sample rate of the audio is 44100 Hz. First, we used MFCC, mel-spectrogram, and prosody (energy, pitch) features, which are widely used in gesture generation research [16]. We also used the zero-crossing rate and rhythmical features in addition to the aforementioned features, because we believe that gestures are highly related to the audio. In the case of the zero-crossing rate, the direction and shape of the gesture can be indicated, so we thought that audio with a high zero-crossing rate could be used to generate gently waving gestures, etc.
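A note on framing: the hop length of 1470 samples at 44100 Hz used for the audio features (Section 3.1.2) yields 44100 / 1470 = 30 feature frames per second, i.e. one feature frame per motion frame at 30 FPS. A numpy sketch of the zero-crossing rate under this framing (the paper uses librosa; this is an illustrative reimplementation over non-overlapping hops):

```python
import numpy as np

SR, HOP = 44100, 1470   # 44100 / 1470 = 30 feature frames per second,
                        # matching the 30 FPS motion data

def zero_crossing_rate(signal, hop=HOP):
    """Per-frame zero-crossing rate over non-overlapping hops: the
    fraction of sample pairs in each frame whose signs differ."""
    n_frames = len(signal) // hop
    rates = []
    for f in range(n_frames):
        frame = signal[f * hop:(f + 1) * hop]
        crossings = np.sum(np.abs(np.diff(np.signbit(frame).astype(int))))
        rates.append(crossings / len(frame))
    return np.array(rates)

t = np.arange(SR) / SR                              # one second of audio
zcr = zero_crossing_rate(np.sin(2 * np.pi * 100 * t))
print(len(zcr))  # -> 30 frames, one per motion frame
```

A 100 Hz sine has 200 zero crossings per second, so each of the 30 frames contains roughly 6 to 7 crossings over its 1470 samples.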
In the case of the rhythmical features, we thought that if the rhythm of the audio is uniform, the corresponding gesture will also have a smooth shape.

The characteristics of the six features mentioned above are as follows. For the MFCC, mel-spectrogram, zero-crossing rate, and rhythmical features, the Librosa library was used. The prosody features were extracted using the Parselmouth library. The MFCC, mel-spectrogram, zero-crossing rate, and rhythmical features were all extracted using a hop length of 1470 on the audio. The mel-spectrogram was extracted by specifying the number of filter banks as 64, and the MFCC was extracted using 40 dimensions. Thus, the features extracted from the audio for model training are MFCC (40 dimensions), mel-spectrogram (64 dimensions), prosody (4 dimensions), zero-crossing rate (1 dimension), and rhythmical features (384 dimensions).

3.1.3 Text. We used pretrained word embeddings to extract features from the text for gesture generation. For word embedding, we used the RoBERTa-based model (784 dimensions). The RoBERTa-based model is a Transformer-based language model that performs better than BERT by applying several improvements, such as dynamic masking, removing the next-sentence-prediction objective, and training with larger batches and more data, which improves performance and generalization. We used the RoBERTa-based model as our word embedding model.

The text features used to train the model were extracted using the transcripts contained in the provided dataset. Each text data item was preprocessed with the word embedding model, and all OOV words were zeroed.
In addition, we used metadata information such as the speaker’s ID and the presence or absence of finger joints.

4 EVALUATION

The GENEA Challenge 2023 differed slightly from the GENEA Challenge 2022 in that it was evaluated on three different aspects:

• Human-likeness: how human-like the gestures are, regardless of the speech.
• Appropriateness for agent speech: whether the gestures are natural and appropriate for the agent’s own speech, while considering human-likeness.
• Appropriateness for the interlocutor: whether the agent shows gestures appropriate to the speech of the interlocutor, while considering human-likeness.

4.1 Result and Discussion

The test dataset used to compare and analyze the performance of our gesture generation model was provided by the GENEA Challenge 2023. Unlike the GENEA Challenge 2022, dyadic situations are also considered, so the dataset used to generate gestures for the main agent includes motion, audio, and text data for the interlocutor. We submitted the motion data generated using the test dataset to the GENEA Challenge 2023 for evaluation and received the following evaluation results.

4.1.1 Human-likeness. Table 1 shows the results of the human-likeness evaluation. Our submission falls under condition SC and, as can be seen in Table 1, it was evaluated as unnatural in terms of human-likeness. To analyze these results, we visualized some of the gestures generated by our model using the 3D animation tool Blender. When we checked the visualized gestures, we found that our model produced several unnatural gestures, such as a gesture with the right arm fixed (left in Figure 2) and a gesture with the right arm bent behind the head (right in Figure 2). This confirmed that our model produced a large number of unnatural gestures, as shown in Table 1.
We also confirmed that simply increasing the number of input features, which was the focus of our research, can have a detrimental effect on the model's ability to generate gestures, by causing it to learn unnecessary information.

Condition   Median          Mean
NA          71 ∈ [70, 71]   68.4 ± 1.0
SG          69 ∈ [67, 70]   65.6 ± 1.4
SF          65 ∈ [64, 67]   63.6 ± 1.3
SJ          51 ∈ [50, 53]   51.8 ± 1.3
SL          51 ∈ [50, 51]   50.6 ± 1.3
SE          50 ∈ [49, 51]   50.9 ± 1.3
SH          46 ∈ [44, 49]   45.1 ± 1.5
BD          46 ∈ [43, 47]   45.3 ± 1.4
SD          45 ∈ [43, 47]   44.7 ± 1.3
BM          43 ∈ [42, 45]   42.9 ± 1.3
SI          40 ∈ [39, 43]   41.4 ± 1.4
SK          37 ∈ [35, 40]   40.2 ± 1.5
SA          30 ∈ [29, 31]   32.0 ± 1.3
SB          24 ∈ [23, 27]   27.4 ± 1.3
SC           9 ∈ [ 9,  9]   11.6 ± 0.9

Table 1: Statistics for the human-likeness evaluation, with confidence intervals at the level α = 0.05. Conditions are ordered by decreasing sample median rating.

Figure 2: Visualisation of the unnatural generated gestures

4.1.2 Appropriateness. Table 2 shows the evaluation results in terms of appropriateness for speech. Our submission, SC, was evaluated as producing unnatural gestures that are not appropriate for the speech. As with human-likeness, we visualized the generated gestures to analyze the evaluation results. When we checked the visualized gestures, we found that in many cases the model was unable to generate gestures that corresponded to the speech. The evaluation results and visualizations confirmed that the zero-crossing rate and rhythmical features, which we used as additional input features, require different preprocessing.

Condition  MAS          Pref. matched   +2   +1    0   −1   −2   Sum
NA          0.81±0.06   73.6%          755  452  185  217  157  1766
SG          0.39±0.07   61.8%          531  486  201  330  259  1807
SJ          0.27±0.06   58.4%          338  521  391  401  155  1806
BM          0.20±0.05   56.6%          269  559  390  451  139  1808
SF          0.20±0.06   55.8%          397  483  261  421  249  1811
SK          0.18±0.06   55.6%          370  491  283  406  252  1802
SI          0.16±0.06   55.5%          283  547  342  428  202  1802
SE          0.16±0.05   54.9%          221  525  489  453  117  1805
BD          0.14±0.06   54.8%          310  505  357  422  220  1814
SD          0.14±0.06   55.0%          252  561  350  459  175  1797
SB          0.13±0.06   55.0%          320  508  339  386  262  1815
SA          0.11±0.06   53.6%          238  495  438  444  162  1777
SH          0.09±0.07   52.9%          384  438  258  393  325  1798
SL          0.05±0.05   51.7%          200  522  432  491  170  1815
SC         −0.02±0.04   49.1%           72  284 1057  314   76  1803

Table 2: Statistics for the speech appropriateness evaluation, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. “Pref. matched” identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.

Condition  MAS          Pref. matched   +2   +1    0   −1   −2   Sum
NA          0.63±0.08   67.9%          367  272   98  189   88  1014
SA          0.09±0.06   53.5%           77  243  444  194   55  1013
BD          0.07±0.06   53.0%           74  274  374  229   59  1010
SB          0.07±0.08   51.8%          156  262  206  263  119  1006
SL          0.07±0.06   53.4%           52  267  439  204   47  1009
SE          0.05±0.07   51.8%           89  305  263  284   73  1014
SF          0.04±0.06   50.9%           94  208  419  208   76  1005
SI          0.04±0.08   50.9%          147  269  193  269  129  1007
SD          0.02±0.07   52.2%           85  307  278  241  106  1017
BM         −0.01±0.06   49.9%           55  212  470  206   63  1006
SJ         −0.03±0.05   49.1%           31  157  617  168   39  1012
SC         −0.03±0.05   49.1%           34  183  541  190   45   993
SK         −0.06±0.09   47.4%          200  227  111  276  205  1019
SG         −0.09±0.08   46.7%          140  252  163  293  167  1015
SH         −0.21±0.07   44.0%           55  237  308  270  144  1014

Table 3: Statistics for the evaluation of appropriateness for the interlocutor, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. “Pref. matched” identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.

Table 3 shows the results of our evaluation in terms of appropriateness for the interlocutor, i.e., the ability to generate gestures that match the speech when information about the interlocutor is added. To analyze the evaluation results, we visualized the gestures generated by our model. We found that our model did not generate appropriate gestures for the interlocutor, but rather unnatural gestures unrelated to the interlocutor's information, as in the monadic situations. We thought that this could be improved by resolving the aforementioned issues of human-likeness and appropriateness.

After analyzing the results of the evaluations, we found that gesture generation based on input features, the focus of our research, requires appropriate preprocessing for each feature rather than simply adding features. Although most of the evaluation results indicate unnatural gestures, we believe that our research has the potential for further development.

5 CONCLUSION AND FUTURE WORK

We conducted a study to generate gestures according to input data (motion, audio, text) based on the model structure of [4]. As mentioned earlier, we conducted experiments by changing the word embedding and adding audio features to the existing model structure. We did not focus on improving the performance of the gesture generation model, but rather on examining how gestures are generated according to the input features. After training our model in this way, we found that it produced low-quality gestures when evaluated.
Through these results, we confirmed that the preprocessing method for each feature is important, not just increasing the number of input features, and we have the following plans to improve gesture generation performance by conducting experiments with various research methods.

We will conduct experiments using SOTA models such as diffusion and RQ-VAE, and will vary detailed hyper-parameters, instead of simply reusing the model structure used previously. We will also conduct experiments to compare and analyze gesture generation performance according to the input features we focused on. Previously, we simply added features for learning, but in the future we will conduct experiments that segment the features of motion, audio, and text. For example, we will conduct experiments using only motion features, only audio features, and a combination of motion and audio features, to see which features have the most impact on gesture generation.

ACKNOWLEDGMENTS

This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2022-0-00043, Adaptive Personality for Intelligent Agents).
jJ_ao08WW3E
Substantial improvements are needed in the revised version
5: Marginally below acceptance threshold
[Paper Summary] This paper introduces a framework receiving motion features, audio features, and text features for producing co-speech motions of a target agent. The framework comprises an encoder, an attention model, and a decoder. The model performance was verified with respect to human-likeness and appropriateness metrics. The experimental results indicated that the proposed approach yields lower accuracy compared to related works in the GENEA 2023 challenge. The authors commented that further efforts should be investigated to improve the performance of this approach. [Comments to authors] Although the proposed solution seems promising, I believe the paper needs substantial improvements. I include several concerns that the authors may consider in the revised version. 1. My primary concern is about the contributions and the novelties of the proposed approach. What are the differences between the proposed approach and the baseline model? The authors stated that “First, we modified the provided baseline model with RoBERTa-based embedding for speech transcription, and second, we designed a gesture generation model by adding a zero-crossing rate and rhythmical feature as additional audio features to the input features.”, but what is the motivation for such modifications? To improve the model performance? If that is the case, it should be clearly revealed by the experimental results. 2. The paper presentation is, overall, acceptable. However, the paper does not cover enough details to allow readers to understand the proposed approach. For instance: 2.1. How did the authors construct the locality constraint attention? Have any modifications been made compared to the baseline? 2.2. Fig. 1 does not give readers an overview of the proposed approach. The Talking with Hands 16.2M dataset contains social signals of both the target agent and the interlocutor, so which features were the authors utilizing? 2.3.
Basic information such as the loss function and training parameters should be included. 3. The paper objective is not really clear. What findings and take-away messages can readers obtain from the paper? For instance, on page 5, the authors stated, "Although most of the evaluation results show unnatural gestures, we believe that our research has the potential for further development.” What is the "potential" that the authors mentioned? Also, in the conclusion, the authors claimed that “we did not focus on improving the performance of the gesture generation model, but rather on checking how gestures are generated according to the input features”. If so, did the authors conduct an ablation experiment to examine the role of individual input features (motion, speech, and text transcription) in the generated motions? 4. There are some typos in the paper. For instance, L189-192 on Page 2: “The alignment feature information obtained through attention and the gesture feature generated at the previous time is used to generate the gesture feature at the next time.” Did the authors mean the next motion frame instead of the next time?
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
mK2qMNf0_Nd
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
Co-Speech Gesture Generation via Audio and Text Feature Engineering
["Geunmo Kim", "Jaewoong Yoo", "Hyedong Jung"]
In recent years, the field of human-computer interaction (HCI) research has seen increasing efforts to model social intelligence and behavior based on artificial intelligence. For human-agent communication to evolve in a "human way", non-verbal features can be used as important factors. We conducted our research as part of the GENEA Challenge 2023, where the task is to generate human gestures using these non-verbal elements. We applied two main approaches to generating natural gestures. First, we modified the provided baseline model to apply RoBERTa-based speech transcription embedding, and second, we designed a gesture generation model by adding a zero-crossing rate and rhythmical features to the input features. The gestures generated by this method were evaluated as unnatural in terms of human-likeness and conformity. However, through this, we will study SOTA model structures for gesture generation in the future and apply various preprocessing methods to the input data to generate natural gestures.
["Human-computer interaction (HCI)", "Gesture generation", "Deep learning", "Multimodal Learning"]
ABSTRACT

In recent years, the field of human-computer interaction (HCI) research has seen increasing efforts to model social intelligence and behavior based on artificial intelligence. For human-agent communication to evolve in a "human way", non-verbal features can be used as important factors. We conducted our research as part of the GENEA Challenge 2023 [13], where the task is to generate human gestures using these non-verbal elements. We applied two main approaches to generating natural gestures. First, we modified the provided baseline model to apply RoBERTa-based speech transcription embedding, and second, we designed a gesture generation model by adding a zero-crossing rate and rhythmical features to the input features. The gestures generated by this method were evaluated as unnatural in terms of human-likeness and conformity. However, through this, we will study SOTA model structures for gesture generation in the future and apply various preprocessing methods to the input data to generate natural gestures.

CCS CONCEPTS

• Human-centered computing → Human computer interaction (HCI).

KEYWORDS

Human-Computer Interaction (HCI), Gesture Generation, Deep Learning, Multimodal Learning

ACM Reference Format:
Geunmo Kim, Jaewoong Yoo, and Hyedong Jung. 2023. Co-Speech Gesture Generation via Audio and Text Feature Engineering. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23 Companion), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3610661.3616553

1 INTRODUCTION

In recent years, the field of Human-Computer Interaction (HCI) research has seen an increase in efforts to model social intelligence and behavior based on artificial intelligence [2, 3].
According to Albert Mehrabian's three elements of communication [20], humans rely more on para-verbal and non-verbal elements of communication than on verbal elements. In order for human-agent communication to evolve towards the human way, para-verbal and non-verbal behavioral cues can be used as important elements. People usually express social signals and behaviors through non-verbal behavioral cues such as facial expressions, body postures, and gestures, or para-verbal behavioral cues such as tone and pitch of vocal sounds [26]. According to Vinciarelli et al. (2009) [26], 90% of nonverbal behavioral cues are associated with speech. Therefore, assuming that a matching gesture exists based on audio and speech data, we participate in the GENEA Challenge 2023 and proceed with the co-speech gesture generation task. The generated co-speech gestures can be utilized for multi-modal fusion by considering matching and combining verbal, para-verbal, and non-verbal features in future research on human-agent communication. In traditional gesture generation research, motion system frameworks have been proposed as concatenative approaches such as motion graphs [10].
In recent years, learning-based approaches have been used to generate high-quality and interactive gestures by utilizing neural networks such as FFNNs, RNNs, GANs, and VAEs [6, 8, 11, 22, 24]. There are also studies on gesture generation tasks using text, speaker identity and style, and personality parameters as input features for generation models [1, 12, 23, 27]. In the GENEA Challenge 2023, our team applied two main approaches to achieve a more natural and appropriate matching with speech. First, we modified the provided baseline model with RoBERTa-based embedding for speech transcription, and second, we designed a gesture generation model by adding a zero-crossing rate and rhythmical feature as additional audio features to the input features. As a result, our entry was evaluated as unnatural in terms of human-likeness and appropriateness. After checking with a 3D animation tool, we found that there were some natural gestures, but most of them were inappropriate for the speech. Through this experiment, we realized that using more features does not always lead to better generation performance.

2 BACKGROUND AND PRIOR WORK

2.1 Data-driven gesture generation research

Data-driven gesture generation models are models that learn from a large amount of data, such as audio, text, and pose data, and generate gestures that correspond to the data. There are a variety of studies [7][18][19][29] that use data-driven generative models to generate gestures. Habibie, Ikhsanul, et al. [7] combined the benefits of database matching and adversarial learning to generate 3D gestures. The paper used the k-Nearest Neighbors (k-NN) algorithm to consider the similarity between the correct audio-pose data stored in the database and the input data. Based on this, the audio-pose data stored in the database is sequentially searched to find the data with the highest similarity to the input data.
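The database-matching step just described can be sketched as a simple nearest-neighbour lookup; the feature vectors and pose labels below are made-up toy values, not data from [7]:

```python
# Illustrative sketch of database matching: given an input audio feature
# vector, return the pose of the stored audio-pose pair whose audio
# features are most similar (toy values, not the cited paper's code).

def nearest_pose(query, database):
    """database: list of (audio_features, pose) pairs."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best_pose = min(database, key=lambda pair: sqdist(query, pair[0]))
    return best_pose

db = [([0.0, 0.1], "pose_rest"),
      ([0.9, 0.8], "pose_wave"),
      ([0.5, 0.5], "pose_point")]
print(nearest_pose([0.85, 0.9], db))  # -> pose_wave
```

In the full pipeline of [7], the retrieved match is then used as additional conditioning for the generator rather than as the final output.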
Then, a Conditional Generative Adversarial Network (cGAN) model [21] was used to generate gestures corresponding to the input data. Unlike the GAN model, the cGAN model can use additional information, such as the label of the input data, to generate the desired data while the generator and discriminator are training. Therefore, the paper used the results of the k-NN algorithm as additional information to generate gestures corresponding to the input data. Lu, Shuhong, et al. [18] used the encoder structure of Liu, Xian, et al. [17] to extract features from text and audio, and the Vector-Quantized Variational AutoEncoder (VQ-VAE) model [25] to extract gesture features. The VQ-VAE model is a model that applies vector quantization (VQ) to the VAE model. Vector quantization is a technique that uses an algorithm similar to k-means clustering to replace continuous probability values with discrete values. By doing so, they converted the latent values of the gesture data into low-dimensional vectors. As a result, they generate gestures similar to the input data by learning low-dimensional latent variables that better represent the features of the gesture data. Lu, Shuhong, et al. [19] considered the problem that, when generating gestures based on speech data, multiple gestures may be generated for the same speech data. To solve this problem, they used individual gesture tokens and a Residual-Quantized Variational Autoencoder (RQ-VAE) model [14]. By using discrete gesture tokens, they solved the mapping problem of gesture generation by assigning different probabilities to the different gestures generated from the same speech data. They also used the RQ-VAE model to train the discrete gesture tokens. The RQ-VAE model recursively discretizes the latent variables in the input data to reduce the loss of information as the encoding progresses. This resulted in higher-quality gestures. Zhang, Fan, et al. [29] proposed the DiffMotion model, based on the diffusion model, for gesture generation.
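The vector-quantization lookup underlying the VQ-VAE and RQ-VAE models discussed above can be sketched as follows; the codebook and latent values are toy stand-ins, and the second stage only illustrates the residual-quantization idea, not either paper's implementation:

```python
# Toy vector-quantization step: replace a continuous latent vector by
# the nearest entry of a (here arbitrary) codebook. RQ-VAE-style models
# then quantize the leftover residual again to reduce information loss.

def quantize(latent, codebook):
    """Return (index, vector) of the nearest codebook entry."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: sqdist(latent, codebook[i]))
    return idx, codebook[idx]

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
latent = [0.9, 0.2]
idx, vec = quantize(latent, codebook)              # first-stage code
residual = [l - v for l, v in zip(latent, vec)]    # what the first code missed
idx2, _ = quantize(residual, codebook)             # residual stage (RQ idea)
print(idx, idx2)  # -> 1 0
```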
The DiffMotion model consists of an Autoregressive Temporal Encoder (AT-Encoder) and a Denoising Diffusion Probabilistic Module (DDPM). The AT-Encoder uses a multi-layer LSTM structure to encode the temporal context of the speech data. Then, through the diffusion and generation process of the DDPM, it learns a one-to-many mapping from input data to gestures and generates new gestures.

2.2 Multimodal gesture generation research

Multimodal research utilizes various types of data through multiple modalities to overcome the limitations of using only a single type of data for learning. Feature vectors are extracted using a deep learning structure suitable for each modality, and multiple tasks are performed based on them. Multimodal gesture generation research uses audio, text, and pose data as input data for each modality to extract feature vectors and utilizes them to generate gestures that correspond to the input data. Various studies use this multimodal structure to generate gestures. Kim, Gwantae, et al. [9] proposed a new framework, Multimodal Pretrained Encoder for Feature Generation (MPE4G), to generate natural gestures using speech, text, and motion as input data for a multimodal structure. This framework addresses the problem of inaccurate gesture generation when there is noise in the input data used for training. To achieve this, the proposed framework consists of three main steps. First, a frame-by-frame embedder and generator are trained with a joint embedding loss and a reconstruction loss. Second, a multimodal encoder is trained with a self-supervised learning approach. Third, the embedder, encoder, decoder, and generator are jointly trained using supervised learning. Based on these components, the authors not only achieved good performance in gesture generation but also handled problems such as noise in the input data and generated natural gestures that respond to the input data.

3 METHOD

Our model structure for gesture generation is based on [4].
Our model structure consists of an encoder, an attention module, and a decoder, as shown in Figure 1. The encoder consists of character embedding, three 1D convolution layers, and a bi-directional LSTM. When a one-hot vector is input, it is converted into an embedding vector through character embedding. It is then converted to an encoded feature through the convolutional layers and the bi-directional LSTM. Attention is the process of aligning what information to take from the encoder, using the encoded features from the encoder and the features generated at the previous step by the decoder's LSTM. In our model, we use a locality-constrained attention as in [4]. The decoder consists of two fully connected layers with ReLU, a uni-directional LSTM, a fully connected layer, and five convolutional layers. The alignment feature information obtained through attention and the gesture feature generated at the previous time step are used to generate the gesture feature at the next time step. Through this process, gestures corresponding to the input data are generated. For gesture generation, we built on the aforementioned model structure and focused on the input features. First, to vary the text features, we used RoBERTa-based pretrained word embeddings (784 dimensions). Next, we used MFCC, mel-spectrogram, pitch, and energy, which are commonly used audio features, as well as zero-crossing rate and rhythmical features. We used two NVIDIA A100-SXM4-80GB GPUs to train the aforementioned models. For both the monadic and dyadic settings, we trained for a total of 25,000 iterations and set the learning rate to 1e-4. We also used a weight decay value of 1e-6 and a batch size of 64 to match the GPU memory. For the optimizer and loss function used for training, we used the widely used Adam optimizer and the MSE loss function.

3.1 Data and data processing

We trained our model using the dataset [15] provided by the GENEA Challenge 2023.
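The encoder-attention-decoder loop described in Section 3 can be sketched in heavily simplified form; the windowed softmax below stands in for the locality-constrained attention, and a fixed mixing step replaces the trained decoder network, so this is an illustrative toy, not the actual model:

```python
import math

# Toy sketch of the attention + autoregressive decoding loop: at each
# step, attend over encoded input features inside a local window around
# the current position, then combine the attended context with the
# previous gesture feature to produce the next one.

def local_attention(encoded, center, window=1):
    lo = max(0, center - window)
    hi = min(len(encoded), center + window + 1)
    scores = [sum(encoded[i]) for i in range(lo, hi)]   # toy scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    weights = [e / sum(exps) for e in exps]             # softmax over the window
    dim = len(encoded[0])
    return [sum(w * encoded[lo + i][d] for i, w in enumerate(weights))
            for d in range(dim)]

def decode(encoded, steps):
    prev = [0.0] * len(encoded[0])                      # initial gesture feature
    out = []
    for t in range(steps):
        context = local_attention(encoded, min(t, len(encoded) - 1))
        prev = [0.5 * c + 0.5 * p for c, p in zip(context, prev)]  # toy "decoder"
        out.append(list(prev))
    return out

encoded = [[0.1, 0.2], [0.4, 0.1], [0.3, 0.3], [0.2, 0.5]]
gestures = decode(encoded, steps=4)
print(len(gestures), len(gestures[0]))  # -> 4 2
```

In the real model, the mixing step is the LSTM-plus-convolution decoder described above, and the attention window shape is learned rather than fixed.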
The dataset is based on the Talking With Hands 16.2M gesture dataset, which contains audio and motion capture data of several pairs of people talking freely about various topics. The dataset consists of 372 training samples and 41 validation samples. The training and validation sets contain motion capture data (BVH format), audio (WAV format), transcript (CSV format) data corresponding to the motion, and speaker ID (CSV format) data, respectively. Since the GENEA Challenge 2023 [13] considers not only monadic but also dyadic situations, unlike the GENEA Challenge 2022 [28], the training and validation datasets include the main agent and additionally the interlocutor.

Figure 1: Our Proposed Architecture

3.1.1 Motion. We extracted features from the motion using the PyMo library for gesture generation. The motion FPS is 30. The team used an exponential map [5] to represent 3D motion. Unlike the GENEA Challenge 2022, the GENEA Challenge 2023 evaluates only the full body [13]. Therefore, we utilised the motion features corresponding to the full body, using the root position, 19 keypoints in the upper body, and 6 keypoints in the lower body. The full body thus has 78 dimensions.

3.1.2 Audio. We extracted several features from the audio for gesture generation. The sample rate of the audio is 44100 Hz. First, we used MFCC, mel-spectrogram, and prosody (energy, pitch) features, which are widely used in gesture generation research [16]. We also used the zero-crossing rate and a rhythmical feature in addition to the aforementioned features, because we believe that gestures are highly related to the audio. In the case of the zero-crossing rate, the direction and shape of the gesture can be related to it, so we thought that audio with a high zero-crossing rate could be used to generate gently waving gestures, etc.
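Because the motion runs at 30 FPS, audio features can be extracted frame-synchronously; the sketch below uses the 44100 Hz sample rate and the hop length of 1470 reported in Section 3.1.2, with a plain-Python zero-crossing-rate computation standing in for the Librosa call:

```python
import math

SR = 44100           # audio sample rate of the dataset
HOP = 1470           # hop length used for feature extraction
print(SR / HOP)      # -> 30.0, one audio feature frame per motion frame

def zero_crossing_rate(samples, hop=HOP):
    """Stand-in for librosa's ZCR: fraction of sign changes per frame."""
    rates = []
    for start in range(0, len(samples) - hop, hop):
        frame = samples[start:start + hop]
        crossings = sum(1 for a, b in zip(frame, frame[1:])
                        if (a < 0) != (b < 0))
        rates.append(crossings / (len(frame) - 1))
    return rates

# One second of a 440 Hz sine wave: about 880 sign changes per second
t = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]
zcr = zero_crossing_rate(t)
print(len(zcr))      # -> 29 full frames within one second
```

The hop length of 1470 divides 44100 exactly, so every motion frame gets exactly one feature column without resampling.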
In the case of the rhythmical feature, we thought that if the rhythm of the audio is uniform, the corresponding gesture will also have a smooth shape. The characteristics of the six features mentioned above are as follows. For the MFCC, mel-spectrogram, zero-crossing rate, and rhythmical features, the Librosa library was used. The prosody feature was extracted using the Parselmouth library. The MFCC, mel-spectrogram, zero-crossing rate, and rhythmical features were all extracted using a hop length of 1470 on the audio. The mel-spectrogram was extracted with 64 filter banks, and the MFCC was extracted with 40 dimensions. Thus, the features extracted from the audio for model training are MFCC (40 dimensions), mel-spectrogram (64 dimensions), prosody (4 dimensions), zero-crossing rate (1 dimension), and the rhythmical feature (384 dimensions).

3.1.3 Text. We used pretrained word embeddings to extract features from the text for gesture generation. For word embedding, we used the RoBERTa-based model (784 dimensions). The RoBERTa-based model is a Transformer-based language model that performs better than BERT by applying several improvements, such as dropping the next-sentence prediction objective and using dynamic masking during pretraining. It also shows better generalization performance through techniques that reduce overfitting during the training process. We used the RoBERTa-based model as our word embedding model. The text features used to train the model were extracted using the transcripts contained in the provided dataset. Each text datum was preprocessed with the word embedding model, and all OOV words were zeroed.
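The OOV handling described above (zeroing unknown words) amounts to a lookup with a zero-vector default; the tiny table and 4-dimensional vectors below are toy stand-ins for the RoBERTa-based embeddings:

```python
# Toy sketch of OOV handling: words missing from the embedding table
# map to an all-zero vector. Table contents and the 4-dim size are
# illustrative stand-ins, not the actual RoBERTa embeddings.

EMB_DIM = 4
table = {"hello": [0.1, 0.2, 0.3, 0.4],
         "world": [0.5, 0.1, 0.0, 0.2]}

def embed(tokens):
    zero = [0.0] * EMB_DIM
    return [table.get(tok, zero) for tok in tokens]

print(embed(["hello", "qwxz"]))  # OOV token "qwxz" becomes a zero vector
```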
In addition, we used metadata information such as the speaker's ID and the presence or absence of finger joints.

4 EVALUATION

The GENEA Challenge 2023 differed slightly from the GENEA Challenge 2022 in that submissions were evaluated on three different aspects:

• Human-likeness: How human-like the gestures are, regardless of the speech.
• Appropriateness for agent speech: Whether the gestures are natural and appropriate for the agent's own speech, while controlling for human-likeness.
• Appropriateness for the interlocutor: Whether the agent shows gestures that are appropriate given the interlocutor's speech, while controlling for human-likeness.

4.1 Result and Discussion

The test dataset used to compare and analyze the performance of our gesture generation model was provided by the GENEA Challenge 2023. Unlike the GENEA Challenge 2022, dyadic situations are also considered, so the dataset used to generate gestures for the main agent includes motion, audio, and text data for the interlocutor. We submitted the motion data generated using the test dataset to the GENEA Challenge 2023 for evaluation and received the following evaluation results.

4.1.1 Human-likeness. Table 1 shows the results of the human-likeness evaluation. Our submission is condition SC, and as can be seen in Table 1, it was evaluated as producing unnatural gestures in terms of human-likeness. To analyze these results, we visualized some of the gestures generated by our model using the 3D animation tool Blender. When we checked the visualized gestures, we found that our model produced several unnatural gestures, such as a gesture with the right arm fixed (left in Figure 2) and a gesture with the right arm bent behind the head (right in Figure 2). This confirmed that our model produced a large number of unnatural gestures, as shown in Table 1.
We also confirmed that simply increasing the number of input features, which was the focus of our research, can have a detrimental effect on the model's ability to generate gestures, by leading it to learn unnecessary information.

Condition | Median        | Mean
NA        | 71 ∈ [70, 71] | 68.4 ± 1.0
SG        | 69 ∈ [67, 70] | 65.6 ± 1.4
SF        | 65 ∈ [64, 67] | 63.6 ± 1.3
SJ        | 51 ∈ [50, 53] | 51.8 ± 1.3
SL        | 51 ∈ [50, 51] | 50.6 ± 1.3
SE        | 50 ∈ [49, 51] | 50.9 ± 1.3
SH        | 46 ∈ [44, 49] | 45.1 ± 1.5
BD        | 46 ∈ [43, 47] | 45.3 ± 1.4
SD        | 45 ∈ [43, 47] | 44.7 ± 1.3
BM        | 43 ∈ [42, 45] | 42.9 ± 1.3
SI        | 40 ∈ [39, 43] | 41.4 ± 1.4
SK        | 37 ∈ [35, 40] | 40.2 ± 1.5
SA        | 30 ∈ [29, 31] | 32.0 ± 1.3
SB        | 24 ∈ [23, 27] | 27.4 ± 1.3
SC        |  9 ∈ [ 9,  9] | 11.6 ± 0.9

Table 1: Statistics for the human-likeness evaluation, with confidence intervals at the level α = 0.05. Conditions are ordered by decreasing sample median rating.

Figure 2: Visualisation of the unnatural generated gestures

4.1.2 Appropriateness. Table 2 shows the evaluation results in terms of appropriateness for speech. For our submission, SC, the evaluation indicates unnatural gestures that are not appropriate for the speech. As with human-likeness, we visualized the generated gestures to analyze the evaluation results. When we checked the visualized gestures, we found that in many cases the model was unable to generate gestures that corresponded to the speech. The evaluation results and visualizations confirmed that the zero-crossing rate and rhythmical features, which we used as additional input features, require different preprocessing.

Condition | MAS          | Pref. matched | 2   | 1   | 0    | −1  | −2  | Sum
NA        | 0.81 ± 0.06  | 73.6%         | 755 | 452 | 185  | 217 | 157 | 1766
SG        | 0.39 ± 0.07  | 61.8%         | 531 | 486 | 201  | 330 | 259 | 1807
SJ        | 0.27 ± 0.06  | 58.4%         | 338 | 521 | 391  | 401 | 155 | 1806
BM        | 0.20 ± 0.05  | 56.6%         | 269 | 559 | 390  | 451 | 139 | 1808
SF        | 0.20 ± 0.06  | 55.8%         | 397 | 483 | 261  | 421 | 249 | 1811
SK        | 0.18 ± 0.06  | 55.6%         | 370 | 491 | 283  | 406 | 252 | 1802
SI        | 0.16 ± 0.06  | 55.5%         | 283 | 547 | 342  | 428 | 202 | 1802
SE        | 0.16 ± 0.05  | 54.9%         | 221 | 525 | 489  | 453 | 117 | 1805
BD        | 0.14 ± 0.06  | 54.8%         | 310 | 505 | 357  | 422 | 220 | 1814
SD        | 0.14 ± 0.06  | 55.0%         | 252 | 561 | 350  | 459 | 175 | 1797
SB        | 0.13 ± 0.06  | 55.0%         | 320 | 508 | 339  | 386 | 262 | 1815
SA        | 0.11 ± 0.06  | 53.6%         | 238 | 495 | 438  | 444 | 162 | 1777
SH        | 0.09 ± 0.07  | 52.9%         | 384 | 438 | 258  | 393 | 325 | 1798
SL        | 0.05 ± 0.05  | 51.7%         | 200 | 522 | 432  | 491 | 170 | 1815
SC        | −0.02 ± 0.04 | 49.1%         | 72  | 284 | 1057 | 314 | 76  | 1803

Table 2: Statistics for the speech appropriateness evaluation, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. “Pref. matched” identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.

Condition | MAS          | Pref. matched | 2   | 1   | 0   | −1  | −2  | Sum
NA        | 0.63 ± 0.08  | 67.9%         | 367 | 272 | 98  | 189 | 88  | 1014
SA        | 0.09 ± 0.06  | 53.5%         | 77  | 243 | 444 | 194 | 55  | 1013
BD        | 0.07 ± 0.06  | 53.0%         | 74  | 274 | 374 | 229 | 59  | 1010
SB        | 0.07 ± 0.08  | 51.8%         | 156 | 262 | 206 | 263 | 119 | 1006
SL        | 0.07 ± 0.06  | 53.4%         | 52  | 267 | 439 | 204 | 47  | 1009
SE        | 0.05 ± 0.07  | 51.8%         | 89  | 305 | 263 | 284 | 73  | 1014
SF        | 0.04 ± 0.06  | 50.9%         | 94  | 208 | 419 | 208 | 76  | 1005
SI        | 0.04 ± 0.08  | 50.9%         | 147 | 269 | 193 | 269 | 129 | 1007
SD        | 0.02 ± 0.07  | 52.2%         | 85  | 307 | 278 | 241 | 106 | 1017
BM        | −0.01 ± 0.06 | 49.9%         | 55  | 212 | 470 | 206 | 63  | 1006
SJ        | −0.03 ± 0.05 | 49.1%         | 31  | 157 | 617 | 168 | 39  | 1012
SC        | −0.03 ± 0.05 | 49.1%         | 34  | 183 | 541 | 190 | 45  | 993
SK        | −0.06 ± 0.09 | 47.4%         | 200 | 227 | 111 | 276 | 205 | 1019
SG        | −0.09 ± 0.08 | 46.7%         | 140 | 252 | 163 | 293 | 167 | 1015
SH        | −0.21 ± 0.07 | 44.0%         | 55  | 237 | 308 | 270 | 144 | 1014

Table 3: Statistics for the evaluation of appropriateness for the interlocutor, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. “Pref. matched” identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.

Table 3 shows the results of our evaluation in terms of appropriateness for the interlocutor, i.e., the ability to generate gestures that match the speech as information about the interlocutor is added. To analyze the evaluation results, we visualized the gestures generated by our model. We found that our model did not generate appropriate gestures for the interlocutor, but unnatural gestures that were not related to the interlocutor's information, just as in the monadic situations. We believe this could be improved by resolving the aforementioned issues of human-likeness and appropriateness. After analyzing the results of the evaluation, we found that gesture generation based on input features, which is the focus of our research, requires appropriate preprocessing for each feature rather than simply adding features. Although most of the evaluation results show unnatural gestures, we believe that our research has the potential for further development.

5 CONCLUSION AND FUTURE WORK

We conducted a study to generate gestures according to the input data (motion, audio, text) based on the model structure of [4]. As mentioned earlier, we conducted experiments by changing the word embedding and adding audio features on top of the existing model structure. We did not focus on improving the performance of the gesture generation model, but rather on checking how gestures are generated according to the input features. After training our model in this way, we found that it produced low-quality gestures when evaluated.
Through these results, we confirmed that the preprocessing method for each feature is important, not just increasing the number of input features, and we have the following plans to improve the performance of gesture generation by conducting experiments with various research methods. We will conduct experiments with SOTA model structures such as diffusion and RQ-VAE, and with different detailed hyper-parameters, instead of simply reusing the model structure used in the past. We will also conduct experiments in a different way to compare and analyze the performance of gesture generation according to the input features we focused on. In the past, we simply added features to learn from, but in the future, we will conduct experiments by segmenting the features of motion, audio, and text. For example, we will conduct experiments using only motion features, only audio features, and a combination of motion and audio features to see which features have the most impact on gesture generation.

ACKNOWLEDGMENTS

This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2022-0-00043, Adaptive Personality for Intelligent Agents).
I6a1qRx1JW2
The idea is good but further engineering effort is needed.
4: Ok but not good enough - rejection
The authors propose to use text embeddings from pretrained models, i.e., RoBERTa, instead of traditional word vectors, i.e., FastText, to try to improve the naturalness of the generated gestures and their matching with the speech. The authors also try to include the zero-crossing rate and what they call "rhythmical features" for the same goal. Although similar approaches, i.e., using pretrained models, have been used in previous research, a complete and comprehensive comparison study of the effects of adopting these pretrained models in the gesture generation task, and guidance on how gesture generation should incorporate pretrained models, are still missing. It is good to see that the authors are making efforts in this direction. Their results could be useful for building more powerful gesture generation systems. Unfortunately, from the results it is obvious that the authors have failed to train their model to work. Their model's outputs are evaluated as very unnatural. It is hard to conduct an ablation study when the model is not working normally. As a result, the paper may not provide valuable information for the relevant fields. Thus, I feel this paper is below the acceptance threshold in its current form. Lastly, the paper should provide descriptions for techniques whose reference is not provided, e.g., what are "rhythmical features"? Just providing a reference would suffice.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
FovoQL3nygw
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation
["Leon Harz", "Hendric Vo\u00df", "Stefan Kopp"]
Human communication relies on multiple modalities such as verbal expressions, facial cues, and bodily gestures. Developing computational approaches to process and generate these multimodal signals is critical for seamless human-agent interaction. A particular challenge is the generation of co-speech gestures due to the large variability and number of gestures that can accompany a verbal utterance, leading to a one-to-many mapping problem. This paper presents an approach based on a Feature Extraction Infusion Network (FEIN-Z) that adopts insights from robot imitation learning and applies them to co-speech gesture generation. Building on the BC-Z architecture, our framework combines transformer architectures and Wasserstein generative adversarial networks. We describe the FEIN-Z methodology and evaluation results obtained within the GENEA Challenge 2023, demonstrating good results and significant improvements in human-likeness over the GENEA baseline. We discuss potential areas for improvement, such as refining input segmentation, employing more fine-grained control networks, and exploring alternative inference methods.
["machine learning", "deep learning", "co-speech gesture generation", "gesture synthesis", "multimodal data", "transformer", "behavior cloning", "reinforcement learning"]
ABSTRACT

Human communication relies on multiple modalities such as verbal expressions, facial cues, and bodily gestures. Developing computational approaches to process and generate these multimodal signals is critical for seamless human-agent interaction. A particular challenge is the generation of co-speech gestures, due to the large variability and number of gestures that can accompany a verbal utterance, leading to a one-to-many mapping problem. This paper presents an approach based on a Feature Extraction Infusion Network (FEIN-Z) that adopts insights from robot imitation learning and applies them to co-speech gesture generation. Building on the BC-Z architecture, our framework combines transformer architectures and Wasserstein generative adversarial networks. We describe the FEIN-Z methodology and the evaluation results obtained within the GENEA Challenge 2023, demonstrating good results and significant improvements in human-likeness over the GENEA baseline. We discuss potential areas for improvement, such as refining input segmentation, employing more fine-grained control networks, and exploring alternative inference methods.

CCS CONCEPTS

• Human-centered computing → Interactive systems and tools; Empirical studies in interaction design; HCI theory, concepts and models; • Computing methodologies → Neural networks; Learning latent representations; Unsupervised learning.

KEYWORDS

machine learning; deep learning; co-speech gesture generation; gesture synthesis; multimodal data; transformer; behavior cloning; reinforcement learning

ACM Reference Format:
Leon Harz∗, Hendric Voß∗, and Stefan Kopp. 2023. FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France.
ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3577190.3616115
∗Both authors contributed equally to the paper.

ICMI ’23, October 9–13, 2023, Paris, France
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0055-2/23/10.

1 INTRODUCTION
Human communication is a multifaceted process that relies on various modalities, including verbal expressions, facial cues, and bodily gestures. Combining these modalities allows us to convey complex messages and facilitate meaningful interactions [9, 50]. Consequently, the development of machines that can process and generate these multi-modal signals is crucial to enable seamless interaction between humans and agents. A key aspect that makes gesture generation particularly challenging is the existence of multiple valid gestures for a given interaction. Unlike verbal expressions, which often have a single intended meaning, gestures can convey different nuances and interpretations, leading to a one-to-many mapping problem [41]. Capturing this inherent variability and generating contextually appropriate gestures is a complex task that requires careful consideration. The importance of gesture generation extends beyond research to practical applications in real-world scenarios and virtual environments.
In human-robot interaction, gestures play a crucial role in enhancing communication and facilitating natural interactions between humans and robotic agents [56]. Similarly, in virtual reality, realistic and expressive gestures contribute to immersion and engagement, enabling more intuitive and compelling experiences [35]. Therefore, the development of robust and effective gesture-generation methods has great potential for improving various areas of human-machine interaction.

In this work, we propose the FEIN-Z framework, a combination of the proposed Feature Extraction Infusion Network (FEIN) and the zero-shot learning aspect of the BC-Z architecture (Z). Inspired by recent achievements in robotic imitation learning, we extend the BC-Z approach [27], intended to generalize robotic manipulation tasks to unseen problems, to the co-speech gesture generation domain. As transformer architectures have shown promising results in a wide variety of domains [17, 48], including co-speech gesture generation [38], we replace and extend multiple components of the original BC-Z approach with a transformer architecture. Generative adversarial networks (GANs) are widely used in the robotic and co-speech gesture generation domains [20, 52]. Building upon the insight gained from recent approaches [52], we propose to use a Wasserstein generative adversarial network (WGAN) with a Wasserstein divergence objective to guide our framework to generate natural and expressive gestures. The released evaluation results of the GENEA Challenge 2023 show that our framework outperforms the challenge baseline with regard to human-likeness by a significant margin and ranks in the top half of all evaluated approaches [31].
In the next sections, we will first give a brief overview of the existing work and current achievements of co-speech gesture generation (Section 2), before detailing the proposed FEIN-Z architecture, the individual components, the data processing, and our training procedure (Section 3). Finally, we will discuss the results of the performed evaluation (Section 4) and conclude with an outlook on possible improvements of our work (Section 6).

2 RELATED WORK
Gesture generation is an area of research that is rapidly progressing. Previous studies have explored various approaches, initially focusing on rule-based methods [10, 29, 34, 40] and simple computational models [8, 19], and later transitioning to early machine learning techniques [12, 23]. Currently, data-driven approaches that integrate multiple modalities are being employed [4, 41, 59], advancing the field even further.

Initially, gesture generation relied on manually crafted rules, either directly applied to specific avatars or used in conjunction with computational models that estimated appropriate gestures based on accompanying speech [10, 19, 29, 34]. Although these approaches generally struggled to produce natural and fluent gestures, they did enable the creation of complex representative gestures that are challenging to achieve with current data-driven methods [5, 6, 29, 34].

During the beginning of data-driven gesture generation, the focus was primarily on single modalities, where gestures were generated based on previous gesture frames [47], textual inputs [12, 56], or audio-driven inputs [18, 21, 23]. Recent research has witnessed a notable shift towards the generation of multi-modal co-speech gestures. This approach integrates gestures with audio, text, and other input modalities to produce varied and natural gestures.
To accomplish this, advanced techniques such as generative adversarial networks (GANs) [3, 41, 52, 54, 55], cyclic functions [26], glow networks with invertible convolutions [24], variational autoencoders [38, 46], and deep reinforcement learning [46] have been used. Recurrent neural networks, specifically bi-directional Long Short-Term Memory (Bi-Directional LSTM) and gated recurrent unit (GRU) networks [13, 25], have demonstrated the ability to generate natural co-speech gestures [23, 57], with various adaptations of recurrent architectures still being utilized in recent approaches [28, 30, 44, 51]. Notably, the incorporation of style embeddings has facilitated the generation of distinct gesture styles for individual speakers, thereby enabling diverse variations in gestures that are tailored to specific styles or speakers [21, 55].

Recent advancements in the field of co-speech gesture generation can be broadly categorized into two main approaches: retrieval-based methods and learning-based methods. Retrieval-based methods involve the creation or learning of predefined sets of gesture units and employ techniques such as keyword matching, semantic analysis, and prosody analysis to retrieve corresponding gestures from a comprehensive database [59]. Conversely, learning-based methods focus on training models to directly predict co-speech gestures using paired co-speech gesture data [55]. In recent studies, some researchers have automated the creation of gesture unit databases by leveraging training data. These gesture units are then employed to train deep learning models, enabling the generation of new and varied co-speech gestures [38]. Both retrieval-based and learning-based methods have proven to be effective in addressing the inherent challenge of one-to-many mapping in co-speech gestures [11, 32, 44, 55].
Notably, recent work on retrieval-based methods has even demonstrated superior performance compared to ground-truth gestures [58, 59].

Simultaneously, significant progress has been made in the realm of reinforcement learning for robot control, particularly in the utilization of text and visual data as input. Within this context, text data is commonly employed either as action descriptions or goal descriptions. Recently, successful approaches have emerged leveraging large language models (LLMs), which generate suitable plans for given goals [1, 36, 42]. These approaches harness LLMs to break down goal descriptions into a sequence of feasible low-level actions expressed in natural language. Subsequently, the action descriptions undergo embedding and serve as additional input to a reinforcement learning model. As an example, PaLM-SayCan incorporates the BC-Z network [27] to acquire low-level robot skills by providing visual data of the current state alongside text descriptions of planned actions.

Both the co-speech gesture generation and reinforcement imitation learning domains share a common goal: to generate elaborate and complex outputs by acquiring knowledge from a relatively limited data set. As the imitation learning domain has made significant progress in minimizing the data requirements for generating complex outputs, we believe that these achievements can be leveraged in the gesture generation domain. Therefore, we propose our novel framework, which is built on the foundation of imitation learning, with the expectation of extending these advances to gesture generation.

3 MODEL AND METHOD
Our framework builds upon the BC-Z architecture by Jang et al. [27], which is a flexible imitation learning system that can learn from both demonstrations and interventions for a given zero-shot task. Similar to our approach, the BC-Z architecture generates its output in an autoregressive manner.
However, given the unique domain and data characteristics of co-speech gestures, we have made several modifications to the backbone of the BC-Z architecture to adapt it to our domain. In particular, we replaced the vision network component of BC-Z with an attention-based network that takes inputs from each modality (Transformer Network). In addition, we refined the Feature-wise Linear Modulation (FiLM) network [43], while retaining the fundamental concept of linear modulation applied to the previous embedding. We refer to this modified FiLM architecture as the Feature Extraction Infusion Network (FEIN). Our framework takes audio, text, and speaker identity information from both the main agent and the interlocutor as input, alongside gestures from the interlocutor. To incorporate the temporal dimension of the provided data, we employ the positional encoding techniques proposed by Vaswani et al. [49]. The transformer network receives audio features, text features, and speaker identity information from both the main agent and the interlocutor. The FEIN module also utilizes this data, with the addition of the previous t gestures from both the main agent and the interlocutor. The output of the transformer network is then combined with features extracted from the FEIN module. The resulting embedding is further processed by a joint-specific Fully Connected Network (FCN). In addition to the architectural refinements, we utilize a Wasserstein GAN with Wasserstein divergence (WGAN-div) to improve the generation performance of our framework [53]. For the discriminator, we employ an FCN consisting of four linear layers, using the leaky ReLU activation function [39].
Figure 1 gives an overview of our approach. In the following sections, we will provide a detailed description of the sub-modules of this framework, including the attention-based network, FEIN, and the control network.

3.1 Transformer Blocks
The presented framework incorporates a total of four transformer blocks, each possessing a consistent underlying architecture with distinct parameters. These blocks comprise a multi-attention head followed by a feedforward network. To augment the capabilities of the feedforward network, we have introduced the Swish-Gated Linear Unit (SwiGLU) activation function [45] into the transformer blocks. As a result, the output y of the transformer blocks can be computed as follows:

MultiHead(Q, K, V) = Concat(head_1, ..., head_n) W_0 = x    (1)
f(x) = Swish(x · W_1) ⊗ (x · W_2)    (2)
y = f(x) · W_3    (3)

In the above equations, MultiHead denotes the multi-headed attention layer, Swish represents the Swish activation function, and W corresponds to the weights of the linear layers.

3.2 Transformer Network
The BC-Z framework initially relied on visual data, specifically images, to predict robot actions based on the current context. However, our specific scenario lacks visual data, therefore requiring modifications to the original architecture. To address this challenge, we adopt a transformer network, known for its capacity to model long-term dependencies within structured input data. Central to our approach is the integration of audio and text input from both the main agent and the interlocutor. In particular, audio and text data are processed independently. For each input modality, the framework computes an attention-based embedding, which learns the information and relationships present within the data. The individual attention-based embeddings obtained in the preceding step are then aggregated and passed through an additional multi-attention mechanism, known as the ’Combined Transformer’.
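The SwiGLU feedforward of Eqs. (2)-(3) can be sketched in plain NumPy as follows. This is a minimal illustration; the weight shapes and names here are illustrative assumptions, not the dimensions used in the actual framework.

```python
import numpy as np

def swish(x):
    # Swish activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w1, w2, w3):
    """Eqs. (2)-(3): f(x) = Swish(x W1) * (x W2); y = f(x) W3."""
    return (swish(x @ w1) * (x @ w2)) @ w3
```

With x of shape (tokens, d_model), w1 and w2 project to a hidden width and w3 projects back, so the output keeps the input shape.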
This combination stage aims to identify and encapsulate important cues related to the interplay between audio and text data. The resultant composite embedding effectively captures salient information and data relationships, forming the fundamental basis for subsequent processes.

3.3 Feature Extraction Infusion Network (FEIN)
The FiLM network initially used in the BC-Z approach [27] requires a task description and a human demonstration video as inputs. However, this approach is not directly applicable to our specific case. Therefore, we designed a novel network architecture that establishes connections between the current audio-text inputs and the gestures observed in the previous time window. Our dual goals were to ensure coherent gesture generation by conditioning on previous gestures and to inject additional contextual information into the current context.

To achieve these goals, we use three separate stacks of 1D convolutional layers to process the concatenated audio-text data and gesture information. This approach results in an embedding with an enriched spatial feature space, effectively capturing important spatial relationships. For meaningful interplay within these embeddings, a multi-head attention mechanism is incorporated. In this mechanism, the gesture embedding serves as both query and value, while the audio-text embedding acts as the key. The goal of this attention-based embedding is to learn complex dependencies between gestures and audio-text data. The resulting attention-based embedding then traverses two different feed-forward networks. Each network consists of two linear layers with SiLU activation functions to promote non-linearity and information propagation. A normalization layer completes each network, ensuring consistent and stable feature representations. This architectural configuration facilitates the extraction of two essential feature networks: the γ-network and the β-network. These networks contain critical information for the following control model.
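The cross-modal attention step described for the FEIN module (gesture embedding as query and value, audio-text embedding as key) can be sketched as a single-head stand-in. This assumes both embeddings are already aligned to the same length and width; the real module is multi-headed and operates on convolutional embeddings.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fein_attention(gesture_emb, audiotext_emb):
    """Single-head sketch of the FEIN attention: gesture embedding as
    query and value, audio-text embedding as key.  Both inputs are
    assumed to have shape (T, d)."""
    d = gesture_emb.shape[-1]
    scores = gesture_emb @ audiotext_emb.T / np.sqrt(d)   # (T, T)
    weights = softmax(scores, axis=-1)
    return weights @ gesture_emb   # gestures re-weighted by audio-text cues
```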
Within the control network architecture, the role of the γ-network is to provide timing information about previous gestures to the embedding. This helps to maintain gesture consistency across time windows and counteract fragmented gestures. On the other hand, the β-network, due to its additive nature, provides nuanced details to the embedding. This feature allows the framework to capture subtle gestures that might be suppressed by the relatively coarse influence of the γ-network.

Table 1: The employed joints and their corresponding categorizations within the control network

Body part | Number of joints | Joints
root | 3 | b_root
upper body | 21 | b_spine0, b_spine1, b_spine2, b_spine3, b_neck0, b_head
left leg | 6 | b_l_upleg, b_l_leg
right leg | 6 | b_r_upleg, b_r_leg
left arm | 18 | b_l_shoulder, b_l_arm, b_l_arm_twist, b_l_forearm, b_l_wrist_twist, b_l_wrist
left hand | 48 | b_l_pinky1...3, b_l_ring1...3, b_l_middle1...3, b_l_index1...3, b_l_thumb0...3
right arm | 18 | b_r_shoulder, b_r_arm, b_r_arm_twist, b_r_forearm, b_r_wrist_twist, b_r_wrist
right hand | 48 | b_r_thumb0...3, b_r_pinky1...3, b_r_middle1...3, b_r_ring1...3, b_r_index1...3

Figure 1: Top: The proposed FEIN model with the convolutional embedder, transformer block, and γ- and β-FCN. Bottom: Transformer model with transformer blocks. Right: Control network with convolutional layers and γ and β infusion. All inputs (Gesture, Text, Audio, Speaker ID) consist of concatenated speaker and interlocutor information. The subscripts (0:99) and (100:199) denote distinct time windows represented by the input data.

3.4 Control Network
The embedding from the transformer network, along with the γ and β networks from the FEIN model, serves as input for the control network. This network architecture is founded on the framework proposed by Jang et al. [27]. Initially, the embedding undergoes convolutional layer processing, resulting in a distilled embedding.
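The γ/β infusion at the heart of the control network can be sketched as follows. This is a minimal stand-in that omits the convolutional layers between the steps; the layer-normalization choice is an illustrative assumption for the final normalization.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each feature vector to zero mean and unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def control_infusion(embedding, gamma, beta):
    """Sketch of the FEIN infusion inside the control network:
    multiplicative gamma modulation (timing/context), additive beta
    modulation (nuanced detail), then normalization.  Convolutional
    layers between the steps are omitted for brevity."""
    modulated = embedding * gamma   # element-wise gamma infusion
    infused = modulated + beta      # element-wise beta infusion
    return layer_norm(infused)
```

With gamma set to ones and beta to zeros, the infusion reduces to plain normalization of the embedding.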
Subsequently, this distilled embedding is enriched through element-wise multiplication with the γ-network output, which effectively integrates contextual information from the FEIN module. A subsequent convolutional layer processes the modulated output, combining information and yielding a transformed embedding. To further infuse the embedding with contextual cues, the transformed embedding is subject to element-wise addition with the β-network output. This step augments the embedding with supplementary contextual information. Following a final convolutional layer, the output is normalized, yielding a vector that merges current relevant features with essential contextual information. This integration is pivotal for generating coherent gestures, especially when considering the influence of preceding gestures. This processed vector then progresses through a sequence of fully connected networks (FCNs), with each FCN generating joint configurations for specific body parts; see Figure 1. This design imparts fine-grained control over individual body parts, thus facilitating precise manipulation of the model’s movements. The employment of independent body-part-specific FCNs allows the framework to extract distinct features from the shared embedding, enabling a body-part-specific feature space.

3.5 Loss
The loss functions used in our framework are defined as follows. For the discriminator, the loss function is given by:

L^D_wdiv(x, D(z)) = Dis(x) − Dis(D(z)) + δ |∇_x̂ Dis(x̂)|^p    (4)

Here, Dis represents the discriminator function, x represents the original data, and z represents the reconstructed data. The hyperparameter δ controls the magnitude of the divergence penalty. The first component of the loss, Dis(x) − Dis(D(z)), measures the dissimilarity between the real sample x and the output of our framework, D(z). The second term, δ |∇_x̂ Dis(x̂)|^p, corresponds to the divergence penalty, which encourages the generated sample D(z) to closely resemble the distribution of real data.
The generator loss function is defined as:

L^G_wdiv = Dis(D(z))    (5)

This loss function aims to minimize the output of the discriminator, specifically the evaluation of Dis(D(z)).

For behavior cloning, we employ a scaled version of the smoothed L1 loss, defined as:

L1(x, z) = 0.5 θ (x_θ − z_θ)² / β,   if |x − z| < β
L1(x, z) = θ |x_θ − z_θ| − 0.5 β,    otherwise    (6)

This loss function is applied to the positions y and ŷ, velocities y′ and ŷ′, and accelerations y″ and ŷ″. For this, the gradients are calculated using the following formula:

f(y) = Σ_{i=0}^{2} λ_i · (d^i y / dt^i)    (7)
L_bc = L1(f(y_i), f(ŷ_i))    (8)

In these equations, y represents the true gestures, while ŷ denotes the predicted gestures. The function f(y) calculates the gradients of the variable or function y with respect to time. The superscript i in d^i y / dt^i indicates the order of the derivative, ranging from 0 to 2. The λ_i terms are scaling factors applied to the position, velocity, and acceleration losses. The term L_bc corresponds to the loss function used for backpropagation. It is computed as the average of the individual loss terms L_i over a dataset of size N. Each L_i measures the dissimilarity between the calculated gradients f(y_i) and the target gradients f(y*_i). Together, this loss ensures the temporal consistency of the generated gestures.

Figure 2: Box plot visualization for the human-likeness study, provided by the GENEA Challenge 2023 [31]. Our framework is labeled SE. Median ratings are shown as red bars (with 0.05 CI) and mean ratings as yellow diamonds (with 0.05 CI). Box edges indicate the 25th and 75th percentiles. Whiskers cover 95% of ratings for each condition.
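The behavior-cloning loss of Eqs. (6)-(8) and the indicator-gated combination of Eqs. (9)-(10) can be sketched with temporal derivatives approximated by finite differences. The default values for β, θ, λ_i, n, and λ_g here are illustrative, not the exact training settings (except n = 4 and λ_g = 0.05, which the paper states).

```python
import numpy as np

def smooth_l1(a, b, beta=1.0, theta=1.0):
    """Scaled smooth-L1 in the spirit of Eq. (6)."""
    d = np.abs(theta * (a - b))
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta).mean()

def bc_loss(y_true, y_pred, lambdas=(1.0, 1.0, 1.0)):
    """Eqs. (7)-(8): smooth-L1 over positions, velocities, and
    accelerations, using i-th temporal differences along axis 0."""
    loss = 0.0
    for i, lam in enumerate(lambdas):
        loss += lam * smooth_l1(np.diff(y_true, n=i, axis=0),
                                np.diff(y_pred, n=i, axis=0))
    return loss

def total_loss(l_bc, l_g_wdiv, step, n=4, lam_g=0.05):
    """Eqs. (9)-(10): add the generator WGAN loss only every n-th step."""
    return l_bc + (lam_g * l_g_wdiv if step % n == 0 else 0.0)
```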
The overall loss function used in our framework is a combination of the behavior cloning loss (L_bc) and the generator loss (L^G_wdiv):

L_total = L_bc + 1_n · λ_g · L^G_wdiv    (9)

Here, 1_n(s) is an indicator function defined as:

1_n(s) = 1 if s mod n = 0, and 0 otherwise    (10)

This indicator function is used to determine when to apply the generator loss. The parameter n controls the frequency of applying the generator loss, and the scaling factor λ_g adjusts its relative importance compared to the behavior cloning loss. By combining these components, the overall loss function guides the training process to improve the quality and consistency of the generated gestures.

3.6 Data Processing
The GENEA Challenge 2023 provided an adapted version of the Talking With Hands 16.2M dataset [33], extended to a dyadic setting involving both a speaker and an interlocutor. This dataset encompasses various modalities, including 3D full-body gesture data, audio data, text transcripts, and the speaker ID, all organized separately for the speaker and the interlocutor. As part of the challenge, the data was pre-separated into a training set of 371 sequences, a validation set of 40 sequences, and a test set of 69 sequences. Each sequence is approximately 1 minute in length, with a sample rate of 44100 Hz for the audio data. The gesture data was recorded at 30 frames per second. Since the challenge required the generation of the speaker for the test set, this data was omitted.

For our approach, we built upon the preprocessing pipeline established by Chang et al. [11], making necessary modifications to suit our specific requirements. For the audio data, we used multiple feature extraction techniques to obtain three different features: Mel-frequency cepstral coefficients (MFCC) with 40 dimensions, Mel spectrograms with 64 filter banks, and prosody features. All audio features were computed using a window length of 4096 and a hop length of 1470.
Regarding the text transcripts, we used the FastText word embedding model [7], which assigns a 300-dimensional vector representation to each word in the transcript. Since the temporal duration of each word is known, we generated a vector of size [sequence length, 300] containing the corresponding word embedding vector for each word’s duration. For the gesture data, we transformed the rotation of each body and finger joint in the BVH file into an exponential map representation [22]. This transformation resulted in 56 3D body joints for the gesture data.

In the post-processing phase of the gesture output, we performed two operations. First, we clipped the angle of each generated body joint to be within the range of the 2nd and 98th percentiles of the corresponding joint in the training data. This clipping step ensured that the generated angles remained within a reasonable range. Afterward, we applied a rolling-window calculation over 50 frames to smooth the generated output and improve its temporal coherence.

3.7 Training Procedure
The training procedure incorporates both behavior cloning and the WGAN architecture. In our setup, the network is responsible for generating gestures, while the discriminator is used to discriminate between the generated data and the original data. We chose a batch size of 128 and a sequence length of 200 frames, which corresponds to two frame windows: t−1 := [0−99] and t0 := [100−199]. For the optimizer, we use AdamW [37] with a weight decay parameter of 0.01 for both the FEIN network and the discriminator. For the FEIN model, we select a learning rate of 5e−5, while the discriminator utilizes a learning rate of 1e−4. During training, we set the scaling factor λ_g to 0.05.

The audio and text data used in training come from t0, while the gesture data is sourced from t−1. After each prediction step, we optimize the model using the loss function described in Equation 9, and we optimize the discriminator accordingly using its loss function, as defined in Equation 4.
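The two post-processing operations described above (percentile clipping against the training data, followed by a rolling mean) can be sketched as follows. The edge-padding strategy for the rolling window is an illustrative choice, not stated in the paper.

```python
import numpy as np

def postprocess(gestures, train_gestures, window=50):
    """Clip each joint channel of `gestures` (frames x joints) to the
    2nd-98th percentile range of the training data, then smooth with a
    rolling mean over `window` frames (edge padding keeps the length)."""
    lo = np.percentile(train_gestures, 2, axis=0)
    hi = np.percentile(train_gestures, 98, axis=0)
    clipped = np.clip(gestures, lo, hi)
    pad = window // 2
    padded = np.pad(clipped, ((pad, window - 1 - pad), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, padded)
```

Because every smoothed frame is a convex combination of clipped values, the output stays within the per-joint percentile bounds.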
To prevent the network from consistently outperforming the discriminator and to stabilize the training, we apply the generator loss of Equation 5 only every n = 4 steps. In total, we trained our framework for 60 epochs. Every 10 epochs, we computed the validation loss and used the best-performing model to generate the evaluation data.

4 EVALUATION
During the training phase of the framework, we conducted a thorough analysis of various framework configurations, experimenting with different numbers of transformer blocks and parameters. We also explored frameworks that generated gestures for both the main agent and the interlocutor, as well as different input data for the FEIN model. Among these tested frameworks, many did not yield satisfactory results in terms of generating realistic and coherent gestures. As a result, we selected the framework proposed in this study as the most suitable for our purposes.

The main evaluation of the framework was performed alongside other approaches within the GENEA Challenge 2023. Since the evaluation of generated co-speech gestures is largely subjective and objective measures that strongly correlate with subjective evaluations are lacking [41], the evaluation focused primarily on subjective measures. Three specific aspects were evaluated: "Human-Likeness", "Appropriateness for Agent Speech", and "Appropriateness for the Interlocutor". To ensure anonymity, all published results were anonymized and assigned unique labels. Our framework was labeled SE.

4.1 Human-Likeness
The results of the human-likeness evaluation are shown in Figure 2, illustrating the rating distribution obtained for the different approaches. Figure 3 highlights the significant differences between the competitors. Here, our framework receives significantly higher ratings than the dyadic baseline (BD), the monadic baseline (BM), as well as the approaches SH, SD, SI, SK, SA, SB, and SC.
Figure 3: Significant differences between all approaches, provided by the GENEA Challenge 2023 [31]. Our framework is labeled SE. White indicates that the condition on the y-axis is rated significantly higher than the one on the x-axis, while black indicates the opposite (y rated below x). Gray indicates no statistically significant difference at a significance level of α = 0.05, after applying the Holm-Bonferroni correction.

On the other hand, compared to the natural motion (NA) and the approaches SG and SF, our framework receives significantly lower ratings for human-likeness. There were no significant differences in terms of human-likeness between our approach and the approaches SJ and SL.

A significant limitation of our approach, especially concerning human-like gesturing, was the lack of finger movement in all of the generated gestures. Although we trained our framework to produce output for the finger bones, the resulting gestures consistently exhibited a static finger position. Any changes observed in the finger bones were primarily intended to prevent the introduction of artifacts, rather than to add meaningful information to the generated gestures.

Another notable issue was the rapid change of poses in our framework. Although the evaluation only captured footage from the knees up, to prevent any foot sliding from influencing the evaluation, our model consistently exhibited movements that involved a redistribution of weight in the lower part of the torso. Such movements may have compromised the naturalness of the generated gestures and led to a lower ranking in the human-likeness evaluation.

4.2 Appropriateness
The results of the speech appropriateness evaluation for the main agent are depicted in Figure 4a. These ratings indicate the likelihood of each framework being preferred with matching or mismatching gestures.
Our proposed framework, labeled SE, demonstrates statistical significance in terms of speech appropriateness compared to random chance. However, it is notably inferior to framework SG, which exhibits significantly better performance. Additionally, there is no significant difference between our framework and the approaches SJ, SF, SK, SD, SI, SB, SA, and SH in terms of speech appropriateness. The results for the appropriateness of gestures in response to the interlocutor are presented in Figure 4b. These ratings reflect the likelihood of each framework being preferred with matching or mismatching gestures. Our framework does not exhibit statistical significance compared to random chance in this aspect. Our model does achieve a significantly higher mean appropriateness score (MAS) compared to frameworks SG and SH, and a significantly lower MAS compared to the natural motion NA. Furthermore, our model does not differ significantly from the dyadic and monadic baselines, as well as frameworks SA, SB, SL, SF, SI, SD, SJ, SC, and SK, in terms of appropriateness of gestures in response to the interlocutor.

The evaluation results presented here show a notable discrepancy when compared to the results of the human-likeness evaluation. While our framework is able to generate co-speech gestures that are perceived as more human-like than the baseline used in the challenge, this does not mean that the generated gestures are perceived as more appropriate for the given context than the baseline. Although the lack of finger bone information could be a possible explanation for this, we suggest that it is indicative of a general problem common to all current approaches to co-speech gesture generation.
Current approaches excel at producing gestures that appear natural and unobtrusive within a given conversation, which is already a commendable achievement for human-agent interaction.

Figure 4: Bar plots visualizing the response distribution in the appropriateness studies, provided by the GENEA Challenge 2023 [31]: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. Our framework is labeled SE. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.

However, this still falls well short of replicating human-to-human interaction. In human-to-human communication, individuals convey additional meaning through their gestures [14], which is based on a shared mental model of the current conversation, themselves, and the conversation partner [15, 16]. With this shared understanding, conversational partners can adapt their gestures to each other and effectively convey meaningful information. Since our framework, and to the best of our knowledge all other available co-speech gesture approaches, lacks this essential insight into the conversation partner, the generated gestures appear highly interchangeable to any human evaluator.

Table 2: The Fréchet Gesture Distance (FGD) for each ablation modification, calculated both in the feature space (FGD F-space) and the raw data space (FGD R-space). For both distances, lower is better.

Methods | FGD F-space ↓ | FGD R-space ↓
natural motion | 0.00 | 0.00
w/o transformer | 169.93 | 3334.14
w/o γ-network | 84.45 | 2667.33
w/o β-network | 61.76 | 1879.82
w/o audio | 50.93 | 965.05
w/o text | 43.9 | 1099.48
w/o main audio | 34.98 | 758.62
w/o inter text | 31.26 | 767.28
w/o main text | 29.49 | 777.91
w/o inter audio | 28.54 | 680.66
original | 23.03 | 533.04

5 ABLATION STUDY
In order to assess the specific contributions of each component within our proposed framework, we conducted an ablation study. First, different input configurations were investigated, including the exclusion of all textual input ("w/o text"), the exclusion of all audio input ("w/o audio"), and the selective removal of these modalities for the main speaker ("w/o main audio" and "w/o main text") as well as for the interlocutor ("w/o inter audio" and "w/o inter text"). Furthermore, different architectural configurations were explored, including deactivation of the output of the combined transformer ("w/o transformer"), deactivation of the β-network ("w/o β-network"), and exclusion of the multiplication process involving the γ-network ("w/o γ-network"). The difference in the generated gestures was measured using the Fréchet Gesture Distance (FGD), as defined by Yoon et al.
[55], for each modification. The evaluation of this distance was performed both in the feature space of the autoencoder network given by the GENEA 2023 challenge and in the context of the raw data space, similar to Ahuja et al. [2]. Detailed results are presented in Table 2. We make an example video of all modifications available online (https://vimeo.com/853326587).

As can be expected, each modification of the framework leads to an increase in the FGD, both in the feature space and in the raw data space. In terms of the modality-specific inputs associated with the interactive partner, all modifications lead to a comparable increase in the FGD. In particular, the removal of the interlocutor's audio produced the smallest change, while the exclusion of the main speaker's audio produced the largest change. The complete removal of both textual and audio information led to a sharp increase in FGD. Visual inspection of the generated gestures revealed instances of elaborate but misaligned gestures in cases of audio removal, whereas small and infrequent gestures were observed following text removal.

Looking at the modifications of the architectural configurations, it becomes clear that the transformer model has successfully learned to generate the gestures, since its removal leads to strongly degraded performance and the largest increase in FGD of all modifications. Similarly, the removal of the β-network and the γ-network leads to a deterioration of the performance. Looking at the visual results of the β-network, the gestures still show natural, fluid movement but are mainly concentrated in front of the chest and do not show any obvious finger movement. On the other hand, the visual results from the γ-network show fast, erratic movements of the hands and upper body, with some unnatural poses.
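The FGD used in these comparisons is the Fréchet (2-Wasserstein) distance between Gaussians fitted to real and generated feature sets. A minimal NumPy sketch is given below; the function name is ours, and the matrix square root usually seen in FID/FGD implementations is replaced here by an equivalent eigenvalue computation.

```python
import numpy as np

def frechet_gesture_distance(real_feats, gen_feats):
    """Frechet distance between Gaussians fitted to two feature sets.

    real_feats, gen_feats: arrays of shape (num_clips, feat_dim).
    """
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    diff = mu_r - mu_g
    # Tr((cov_r cov_g)^(1/2)) equals the sum of square roots of the
    # eigenvalues of cov_r @ cov_g, which are real and non-negative
    # for positive semi-definite covariances.
    eigvals = np.linalg.eigvals(cov_r @ cov_g).real
    trace_sqrt = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_g)
                 - 2.0 * trace_sqrt)
```

By construction the distance is near zero for identical feature distributions and grows as the fitted Gaussians drift apart.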
These results support our intended design choices, with the γ-network focusing mainly on smoothing the temporal information of the generated gestures, while the β-network refines the generated gestures to allow for more elaborate hand movements.

6 CONCLUSION
Our framework presents a novel approach to co-speech gesture generation inspired by robotic imitation learning and based on a behavior cloning architecture. We combine a transformer architecture with a generative adversarial network to create a model that ranks in the top half of the GENEA Challenge 2023 [31]. Although the model did not achieve results comparable to natural motion, we believe that additional training time and more sophisticated input segmentation could lead to improved results. An effective strategy may involve the use of only historical data in the FEIN model to ensure that the input data consists only of aligned gesture, audio, and text data. In addition, the use of a finer-grained control network that distinguishes separate body parts, such as hands and arms, could have the potential to improve the generated gestures. Increasing the feedback provided by the discriminator model in later stages of training is another way to improve performance, as the discriminator shows diminishing returns as training progresses. Additionally, selectively freezing certain models within our framework during later stages of training to focus on refining gestures could lead to performance improvements. Similarly, exploring alternative inference methods, such as predicting one frame at a time or adjusting the time window, may also help to improve the capabilities of the framework. In conclusion, we believe that our architecture demonstrates the potential to generate gestures that exhibit some human-like characteristics, and we believe that there are several ways in which our framework could be improved in the future.
Finally, we hypothesize that the integration of frameworks introduced in multimodal robot learning could further enhance the performance of future gesture generation models.
xihRH-X6I2
A new model to generate different joints
6: Marginally above acceptance threshold
Paper Summary: The paper presents a new model that employs modules such as BC-Z (Control Network) and FEIN to generate different joints. However, the paper does not provide detailed explanations about the specific roles and effects of these modules, nor does it conduct ablation experiments to demonstrate the effectiveness of each module.

Relevance: The topic of the paper is highly relevant to the conference theme, exploring the application of artificial intelligence in generating specific joints.

Significance: The research in this paper is of great importance for understanding and improving joint generation models. However, due to the lack of sufficient empirical evidence, the significance of its contribution needs further verification.

Paper Strengths: The paper proposes a new model, attempting to generate different joints. The paper provides detailed descriptions of the design and implementation of the model.

Paper Weaknesses: The model in the paper seems to be able to generate directly in one step, without the need to generate different joints, which calls into question the appropriateness of calling it a "control model". The paper does not provide detailed explanations about the specific roles and effects of modules like BC-Z (Control Network) and FEIN, nor does it conduct ablation experiments to demonstrate the effectiveness of each module. The post-processing (trimming joints) part of the paper is not well explained, and it is unclear whether it is needed because the generated results do not meet expectations. There are some issues in the evaluation part of the paper, such as contradictions between the descriptions at L623 and L626. There are also some issues with the writing of the paper, such as unreasonable layout, a lack of equation numbers, inconsistent fonts, and some confusing expressions.
Further Comments:
- The specific joints generated in Figure 1 should be listed.
- The original BC-Z uses dedicated losses (Huber loss, log loss) for XYZ, rotation, and gripper, while this work just generates different joints; I don't know if that carries over here. It also seems like the output can be generated directly in one step, so calling it a control model is a bit inappropriate.
- Why is post-processing (pruning the joints) needed? Is it because the generated results are not as expected? The model should be able to learn on its own that the data should lie within the distribution of the training data.
- The modules used are BC-Z (Control Network), FEIN, etc. I would like to know whether each of these modules is useful and which one has the greatest impact on the results; it would be useful if you could add ablation experiments.
- Please check that your assessment is written correctly: L623 is correct, but L626 again states that the result is not significantly different from SF, which contradicts what is clearly evident from Figures 2 and 3.
- Does the method use hand joints? L506 mentions hand gestures, but does L628 mean that hand movements cannot be generated correctly?
- Writing problems: Why is there a large blank space at the bottom left of the first page? I suggest adjusting the layout. The formula near L247 is not labeled, and the MultiHead and Swish fonts are inconsistent. In L520 there is a lowercase "t" after a comma, and the phrase "The network is responsible for generating gestures." is very confusing; the writing in many places could be improved. In Figure 1 the frames are 1:100 and 100:200, while in L552 they are 0-99 and 100-199; I suggest keeping these consistent. In L560, the "6" should be changed to Equation (6), and the same for the "2" in L563. Some descriptions in the paper need further clarification, for example: what does "a redistribution of weight in the lower part of the torso" in L663 mean?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
FovoQL3nygw
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation
["Leon Harz", "Hendric Vo\u00df", "Stefan Kopp"]
Human communication relies on multiple modalities such as verbal expressions, facial cues, and bodily gestures. Developing computational approaches to process and generate these multimodal signals is critical for seamless human-agent interaction. A particular challenge is the generation of co-speech gestures due to the large variability and number of gestures that can accompany a verbal utterance, leading to a one-to-many mapping problem. This paper presents an approach based on a Feature Extraction Infusion Network (FEIN-Z) that adopts insights from robot imitation learning and applies them to co-speech gesture generation. Building on the BC-Z architecture, our framework combines transformer architectures and Wasserstein generative adversarial networks. We describe the FEIN-Z methodology and evaluation results obtained within the GENEA Challenge 2023, demonstrating good results and significant improvements in human-likeness over the GENEA baseline. We discuss potential areas for improvement, such as refining input segmentation, employing more fine-grained control networks, and exploring alternative inference methods.
["machine learning", "deep learning", "co-speech gesture generation", "gesture synthesis", "multimodal data", "transformer", "behavior cloning", "reinforcement learning"]
ABSTRACT
Human communication relies on multiple modalities such as verbal expressions, facial cues, and bodily gestures. Developing computational approaches to process and generate these multimodal signals is critical for seamless human-agent interaction. A particular challenge is the generation of co-speech gestures due to the large variability and number of gestures that can accompany a verbal utterance, leading to a one-to-many mapping problem. This paper presents an approach based on a Feature Extraction Infusion Network (FEIN-Z) that adopts insights from robot imitation learning and applies them to co-speech gesture generation. Building on the BC-Z architecture, our framework combines transformer architectures and Wasserstein generative adversarial networks. We describe the FEIN-Z methodology and evaluation results obtained within the GENEA Challenge 2023, demonstrating good results and significant improvements in human-likeness over the GENEA baseline. We discuss potential areas for improvement, such as refining input segmentation, employing more fine-grained control networks, and exploring alternative inference methods.

CCS CONCEPTS
• Human-centered computing → Interactive systems and tools; Empirical studies in interaction design; HCI theory, concepts and models; • Computing methodologies → Neural networks; Learning latent representations; Unsupervised learning.

KEYWORDS
machine learning; deep learning; co-speech gesture generation; gesture synthesis; multimodal data; transformer; behavior cloning; reinforcement learning

ACM Reference Format:
Leon Harz∗, Hendric Voß∗, and Stefan Kopp. 2023. FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France.
ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3577190.3616115

∗Both authors contributed equally to the paper.

ICMI '23, October 9–13, 2023, Paris, France
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0055-2/23/10. $15.00.

1 INTRODUCTION
Human communication is a multifaceted process that relies on various modalities, including verbal expressions, facial cues, and bodily gestures. Combining these modalities allows us to convey complex messages and facilitate meaningful interactions [9, 50]. Consequently, the development of machines that can process and generate these multi-modal signals is crucial to enable seamless interaction between humans and agents. A key aspect that makes gesture generation particularly challenging is the existence of multiple valid gestures for a given interaction. Unlike verbal expressions, which often have a single intended meaning, gestures can convey different nuances and interpretations, leading to a one-to-many mapping problem [41]. Capturing this inherent variability and generating contextually appropriate gestures is a complex task that requires careful consideration. The importance of gesture generation extends beyond research to practical applications in real-world scenarios and virtual environments.
In human-robot interaction, gestures play a crucial role in enhancing communication and facilitating natural interactions between humans and robotic agents [56]. Similarly, in virtual reality, realistic and expressive gestures contribute to immersion and engagement, enabling more intuitive and compelling experiences [35]. Therefore, the development of robust and effective gesture-generation methods has great potential for improving various areas of human-machine interaction.

In this work, we propose the FEIN-Z framework, a combination of the proposed Feature Extraction Infusion Network (FEIN) and the zero-shot learning aspect of the BC-Z architecture (Z). Inspired by recent achievements in robotic imitation learning, we extend the BC-Z approach [27], intended to generalize robotic manipulation tasks to unseen problems, to the co-speech gesture generation domain. As transformer architectures have shown promising results in a wide variety of domains [17, 48], including co-speech gesture generation [38], we replace and extend multiple components of the original BC-Z approach with a transformer architecture. Generative adversarial networks (GANs) are widely used in the robotics and co-speech gesture generation domains [20, 52]. Building upon the insight gained from recent approaches [52], we propose to use a Wasserstein generative adversarial network (WGAN) with a Wasserstein divergence objective to guide our framework to generate natural and expressive gestures. The released evaluation results of the GENEA Challenge 2023 show that our framework outperforms the challenge baseline with regard to human-likeness by a significant margin and ranks in the top half of all evaluated approaches [31].
In the next sections, we will first give a brief overview of the existing work and current achievements of co-speech gesture generation (Section 2), before detailing the proposed FEIN-Z architecture, the individual components, the data processing, and our training procedure (Section 3). Finally, we will discuss the results of the performed evaluation (Section 4) and conclude with an outlook for possible improvements of our work (Section 6).

2 RELATED WORK
Gesture generation is an area of research that is rapidly progressing. Previous studies have explored various approaches, initially focusing on rule-based methods [10, 29, 34, 40] and simple computational models [8, 19], and later transitioning to early machine learning techniques [12, 23]. Currently, data-driven approaches that integrate multiple modalities are being employed [4, 41, 59], advancing the field even further.

Initially, gesture generation relied on manually crafted rules, either directly applied to specific avatars or used in conjunction with computational models that estimated appropriate gestures based on accompanying speech [10, 19, 29, 34]. Although these approaches generally struggled to produce natural and fluent gestures, they did enable the creation of complex representative gestures that are challenging to achieve with current data-driven methods [5, 6, 29, 34].

During the beginning of data-driven gesture generation, the focus was primarily on single modalities, where gestures were generated based on previous gesture frames [47], textual inputs [12, 56], or audio-driven inputs [18, 21, 23]. Recent research has witnessed a notable shift towards the generation of multi-modal co-speech gestures. This approach integrates gestures with audio, text, and other input modalities to produce varied and natural gestures.
To accomplish this, advanced techniques such as generative adversarial networks (GANs) [3, 41, 52, 54, 55], cyclic functions [26], glow networks with invertible convolutions [24], variational autoencoders [38, 46], and deep reinforcement learning have been used [46]. Recurrent neural networks, specifically Bi-Directional Long Short-Term Memory (Bi-Directional LSTM) and gated recurrent unit (GRU) networks [13, 25], have demonstrated the ability to generate natural co-speech gestures [23, 57], with various adaptations of recurrent architectures still being utilized in recent approaches [28, 30, 44, 51]. Notably, the incorporation of style embeddings has facilitated the generation of distinct gesture styles for individual speakers, thereby enabling diverse variations in gestures that are tailored to specific styles or speakers [21, 55].

Recent advancements in the field of co-speech gesture generation can be broadly categorized into two main approaches: retrieval-based methods and learning-based methods. Retrieval-based methods involve the creation or learning of predefined sets of gesture units and employ techniques such as keyword matching, semantic analysis, and prosody analysis to retrieve corresponding gestures from a comprehensive database [59]. Conversely, learning-based methods focus on training models to directly predict co-speech gestures using paired co-speech gesture data [55]. In recent studies, some researchers have automated the creation of gesture unit databases by leveraging training data. These gesture units are then employed to train deep learning models, enabling the generation of new and varied co-speech gestures [38]. Both retrieval-based and learning-based methods have proven to be effective in addressing the inherent challenge of one-to-many mapping in co-speech gestures [11, 32, 44, 55].
Notably, recent work on retrieval-based methods has even demonstrated superior performance compared to ground-truth gestures [58, 59].

Simultaneously, significant progress has been made in the realm of reinforcement learning for robot control, particularly in the utilization of text and visual data as input. Within this context, text data is commonly employed either as action descriptions or goal descriptions. Recently, successful approaches have emerged leveraging large language models (LLMs), which generate suitable plans for given goals [1] [42] [36]. These approaches harness LLMs to break down goal descriptions into a sequence of feasible low-level actions expressed in natural language. Subsequently, the action descriptions undergo embedding and serve as additional input to a reinforcement learning model. As an example, PaLM-SayCan incorporates the BC-Z network [27] to acquire low-level robot skills by providing visual data of the current state alongside text descriptions of planned actions.

Both the co-speech gesture generation and reinforcement imitation learning domains share a common goal: to generate elaborate and complex outputs by acquiring knowledge from a relatively limited data set. As the imitation learning domain has made significant progress in minimizing the data requirements for generating complex outputs, we believe that these achievements can be leveraged in the gesture generation domain. Therefore, we propose our novel framework, which is built on the foundation of imitation learning, with the expectation of extending these advances to gesture generation.

3 MODEL AND METHOD
Our framework builds upon the BC-Z architecture by Jang et al. [27], which is a flexible imitation learning system that can learn from both demonstrations and interventions for a given zero-shot task. Similar to our approach, the BC-Z architecture generates its output in an autoregressive manner.
However, given the unique domain and data characteristics of co-speech gestures, we have made several modifications to the backbone of the BC-Z architecture to adapt it to our domain. In particular, we replaced the vision network component of BC-Z with an attention-based network that takes inputs from each modality (Transformer Network). In addition, we refined the Feature-wise Linear Modulation (FiLM) network [43], while retaining the fundamental concept of linear modulation applied to the previous embedding. We refer to this modified FiLM architecture as the Feature Extraction Infusion Network (FEIN). Our framework takes audio, text, and speaker identity information from both the main agent and the interlocutor as input, alongside gestures from the interlocutor. To incorporate the temporal dimension of the provided data, we employ positional encoding techniques proposed by Vaswani et al. [49]. The transformer network receives audio features, text features, and speaker identity information from both the main agent and the interlocutor. The FEIN module also utilizes this data, with the addition of previous t-gestures from both the main agent and the interlocutor. The output of the transformer network is then combined with features extracted from the FEIN module. The resulting embedding is further processed by a joint-specific Fully Connected Network (FCN). In addition to the architectural refinements, we utilize a Wasserstein GAN with gradient divergence (WGAN-div) to improve the generation performance of our framework [53]. For the adversarial training, we employ a discriminator with an FCN consisting of four linear layers, using the leaky ReLU activation function [39].
Figure 1 gives an overview of our approach. In the following sections, we will provide a detailed description of the sub-modules of this framework, including the attention-based network, FEIN, and the control network.

3.1 Transformer Blocks
The presented framework incorporates a total of four transformer blocks, each possessing a consistent underlying architecture with distinct parameters. These blocks comprise a multi-attention head followed by a feedforward network. To augment the capabilities of the feedforward network, we have introduced the Swish-Gated Linear Unit (SwiGLU) activation function [45] into the transformer blocks. As a result, the output y of the transformer blocks can be computed as follows:

x = MultiHead(Q, K, V) = Concat(head_1, ..., head_n) W_0    (1)
f(x) = Swish(x · W_1) ⊗ (x · W_2)    (2)
y = f(x) · W_3    (3)

In the above equations, MultiHead denotes the multi-headed attention layer, Swish represents the Swish activation function, and W corresponds to the weights of the linear functions.

3.2 Transformer Network
The BC-Z framework initially relied on visual data, specifically images, to predict robot actions based on the current context. However, our specific scenario lacks visual data, therefore requiring modifications to the original architecture. To address this challenge, we adopt a transformer network, known for its capacity to model long-term dependencies within structured input data. Central to our approach is the integration of audio and text input from both the main agent and the interlocutor. In particular, audio and text data are processed independently. For each input modality, the framework computes an attention-based embedding, which learns the information and relationships present within the data. The individual attention-based embeddings obtained in the preceding step are then aggregated and passed through an additional multi-attention mechanism, known as the 'Combined Transformer'.
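The SwiGLU feed-forward step of Equations (2)–(3) can be sketched in NumPy as follows; the weight names are ours and biases are omitted for brevity.

```python
import numpy as np

def swish(x):
    # Swish / SiLU activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w1, w2, w3):
    # f(x) = Swish(x W1) elementwise-multiplied with (x W2), per Eq. (2);
    # the output projection y = f(x) W3 follows Eq. (3).
    return (swish(x @ w1) * (x @ w2)) @ w3
```

Because Swish(0) = 0, the gate closes completely for zero pre-activations, which is what distinguishes SwiGLU from a plain two-layer feedforward network.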
This combination stage aims to identify and encapsulate important cues related to the interplay between audio and text data. The resultant composite embedding effectively captures salient information and data relationships, forming the fundamental basis for subsequent processes.

3.3 Feature Extraction Infusion Network (FEIN)
The FiLM network initially used in the BC-Z approach [27] requires a task description and a human demonstration video as inputs. However, this approach isn't directly applicable to our specific case. Therefore, we designed a novel network architecture that establishes connections between the current audio-text inputs and the gestures observed in the previous time window. Our dual goals were to ensure coherent gesture generation by conditioning on previous gestures and to inject additional contextual information into the current context.

To achieve these goals, we use three separate stacks of 1D convolutional layers to process the concatenated audio-text data and gesture information. This approach results in an embedding with an enriched spatial feature space, effectively capturing important spatial relationships. For meaningful interplay within these embeddings, a multi-head attention mechanism is incorporated. In this mechanism, the gesture embedding serves as both query and value, while the audio-text embedding acts as the key. The goal of this attention-based embedding is to learn complex dependencies between gestures and audio-text data. The resulting attention-based embedding then traverses two different feed-forward networks. Each network consists of two linear layers with SiLU activation functions to promote non-linearity and information propagation. A normalization layer completes each network, ensuring consistent and stable feature representations. This architectural configuration aims to facilitate the extraction of two essential feature networks: the γ-network and the β-network. These networks contain critical information for the following control model.
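The FiLM-style infusion that the γ- and β-outputs feed into (multiplicative scaling followed by an additive shift) can be sketched as below. This is a simplification: in the actual control network, convolutional layers are interleaved between the two steps, and the function name and shapes here are illustrative.

```python
import numpy as np

def feature_infusion(embedding, gamma, beta):
    """FiLM-style modulation: scale the embedding by gamma, then shift
    it by beta. All three arrays share the same shape, e.g.
    (batch, time, channels)."""
    return embedding * gamma + beta
```

With gamma fixed to ones and beta to zeros, the operation reduces to the identity, which makes the roles of the two networks easy to probe in isolation.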
Within the control network architecture, the role of the γ-network is to provide timing information about previous gestures to the embedding. This helps to maintain gesture consistency across time windows and counteract fragmented gestures. On the other hand, the β-network, due to its additive nature, provides nuanced details to the embedding. This feature allows the framework to capture subtle gestures that might be suppressed by the relatively coarse influence of the γ-network.

3.4 Control Network
The embedding network, derived from the transformer network, along with the γ- and β-networks from the FEIN model, serve as inputs for the control network. This network architecture is founded on the framework proposed by Jang et al. [27].

Table 1: The employed joints and their corresponding categorizations within the control network

Body part    Number of joints   Joints
root         3                  b_root
upper body   21                 b_spine0, b_spine1, b_spine2, b_spine3, b_neck0, b_head
left leg     6                  b_l_upleg, b_l_leg
right leg    6                  b_r_upleg, b_r_leg
left arm     18                 b_l_shoulder, b_l_arm, b_l_arm_twist, b_l_forearm, b_l_wrist_twist, b_l_wrist
left hand    48                 b_l_pinky1...3, b_l_ring1...3, b_l_middle1...3, b_l_index1...3, b_l_thumb0...3
right arm    18                 b_r_shoulder, b_r_arm, b_r_arm_twist, b_r_forearm, b_r_wrist_twist, b_r_wrist
right hand   48                 b_r_thumb0...3, b_r_pinky1...3, b_r_middle1...3, b_r_ring1...3, b_r_index1...3

[Figure 1: Top: The proposed FEIN model with the convolutional embedder, transformer block, and γ- and β-FCN. Bottom: Transformer model with transformer blocks. Right: Control network with convolutional layers and γ and β infusion. All inputs (Gesture, Text, Audio, Speaker ID) consist of concatenated speaker and interlocutor information. The subscripts (0:99) and (100:199) denote distinct time windows represented by the input data.]

Initially, the embedding undergoes convolutional layer processing, resulting in a distilled embedding.
Subsequently, this distilled embedding is enriched through element-wise multiplication with the γ-network output, which effectively integrates contextual information from the FEIN module. A subsequent convolutional layer processes the modulated output, combining information and yielding a transformed embedding. To further infuse the embedding with contextual cues, the transformed embedding is subject to element-wise addition with the β-network output. This step augments the embedding with supplementary contextual information. Following a final convolutional layer, the output is normalized, yielding a vector that merges current relevant features with essential contextual information. This integration is pivotal for generating coherent gestures, especially when considering the influence of preceding gestures. This processed vector then progresses through a sequence of fully connected networks (FCNs), with each FCN generating joint configurations for specific body parts; see Figure 1. This design imparts fine-grained control over individual body parts, thus facilitating precise manipulation of the model's movements. The employment of independent body-part-specific FCNs allows the framework to extract distinct features from the shared embedding, enabling a body-part-specific feature space.

3.5 Loss
The loss functions used in our framework are defined as follows. For the discriminator, the loss function is given by:

L_D^wdiv(x, D(z)) = Dis(x) − Dis(D(z)) + δ |∇_x̂ Dis(x̂)|^p    (4)

Here, Dis represents the discriminator function, x represents the original dataset, and z represents the reconstructed data. The hyperparameter δ controls the magnitude of the divergence penalty. The first component of the loss, Dis(x) − Dis(D(z)), measures the dissimilarity between the real sample x and the output of our framework, D(z). The second term, δ |∇_x̂ Dis(x̂)|^p, corresponds to the divergence penalty, which encourages the generated sample D(z) to closely resemble the distribution of real data.
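A NumPy sketch of Equation (4) is given below. It takes precomputed critic outputs and gradient norms, since in practice the gradient ∇x̂ Dis(x̂) comes from the autodiff framework; the defaults δ = 2 and p = 6 follow the common WGAN-div settings and are our assumption, as the paper does not report its values.

```python
import numpy as np

def wgan_div_d_loss(d_real, d_fake, grad_norms, delta=2.0, p=6):
    """Eq. (4): Dis(x) - Dis(D(z)) + delta * |grad Dis(x_hat)|^p,
    averaged over the batch. grad_norms holds the gradient norm of the
    critic at each interpolated sample x_hat."""
    gap = np.mean(d_real) - np.mean(d_fake)
    penalty = delta * np.mean(np.asarray(grad_norms) ** p)
    return float(gap + penalty)
```

With zero gradient norms, the loss reduces to the plain critic gap, which makes the contribution of the divergence penalty easy to isolate.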
The generator loss function is defined as:

L_G^wdiv = Dis(D(z))    (5)

This loss function aims to minimize the output of the discriminator, specifically the evaluation of Dis(D(z)).

For behavior cloning, we employ a scaled version of the smoothed L1 loss, defined as:

L1(x, z) = 0.5 · θ · (x − z)² / β,  if |x − z| < β
         = θ · |x − z| − 0.5 · β,   otherwise        (6)

This loss function is applied to the positions y and ŷ, velocities y′ and ŷ′, and accelerations y″ and ŷ″. For this, the gradients are calculated using the following formula:

f(y) = Σ_{i=0}^{2} λ_i · d^i y / dt^i    (7)

L_bc = L1(f(y_i), f(ŷ_i))    (8)

[Figure 2: Box plot visualization for the human-likeness study, provided by the GENEA Challenge 2023 [31]. Our framework is labeled SE. Median ratings are shown as red bars (with 0.05 CI) and mean ratings as yellow diamonds (with 0.05 CI). Box edges indicate the 25th and 75th percentiles. Whiskers cover 95% of ratings for each condition.]

In these equations, y represents the true gestures, while ŷ denotes the predicted gestures. The function f(y) calculates the gradients of the variable or function y with respect to time; the superscript i in d^i y / dt^i indicates the order of the derivative, ranging from 0 to 2. The λ_i terms are scaling factors applied to the position, velocity, and acceleration losses.

The term L_bc corresponds to the loss function used for backpropagation. It is computed as the average of the individual loss terms L_i over a dataset of size N. Each L_i measures the dissimilarity between the calculated gradients f(y_i) and the target gradients f(ŷ_i). Together, this loss ensures temporal consistency of the generated gestures.
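Reading Equations (6)–(8) as a λ-weighted sum of smooth-L1 terms over the 0th to 2nd time derivatives (one reasonable interpretation of the text, with derivatives approximated by finite differences), a NumPy sketch looks like this; θ, β, and the λ values are placeholders, as the paper does not report them.

```python
import numpy as np

def smooth_l1(x, z, beta=1.0, theta=1.0):
    # Scaled smooth L1 of Eq. (6), applied element-wise.
    d = np.abs(x - z)
    return np.where(d < beta,
                    0.5 * theta * (x - z) ** 2 / beta,
                    theta * d - 0.5 * beta)

def bc_loss(y_true, y_pred, lambdas=(1.0, 1.0, 1.0)):
    """Behavior-cloning loss over positions, velocities, and
    accelerations (Eqs. (7)-(8)); the i-th time derivative is
    approximated by an i-fold finite difference along axis 0."""
    loss = 0.0
    for i, lam in enumerate(lambdas):
        d_true = np.diff(y_true, n=i, axis=0)  # n=0 returns the input
        d_pred = np.diff(y_pred, n=i, axis=0)
        loss += lam * smooth_l1(d_true, d_pred).mean()
    return float(loss)
```

The loss vanishes exactly when prediction and target match, and penalizing the difference terms couples consecutive frames, which is what gives the generated motion its temporal consistency.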
The overall loss function used in our framework is a combination of the behavior cloning loss (L_bc) and the discriminator loss (L_G^wdiv):

L_total = L_bc + 1_n · λ_g · L_G^wdiv    (9)

Here, 1_n(s) is an indicator function defined as:

1_n(s) = 1, if s % n = 0
       = 0, otherwise      (10)

This indicator function is used to determine when to apply the discriminator loss. The parameter n controls the frequency of applying the discriminator loss, and the scaling factor λ_g adjusts the relative importance of the discriminator loss compared to the behavior cloning loss. By combining these components, the overall loss function guides the training process to improve the quality and consistency of the generated gestures.

3.6 Data Processing
The GENEA Challenge 2023 provided an adapted version of the Talking With Hands 16.2M dataset [33], extended to a dyadic setting involving both a speaker and an interlocutor. This dataset encompasses various modalities, including 3D full-body gesture data, audio data, text transcripts, and the speaker ID, all organized separately for the speaker and the interlocutor. As part of the challenge, the data was pre-separated into a training set of 371 sequences, a validation set of 40 sequences, and a test set of 69 sequences. Each sequence is approximately 1 minute in length, with a sample rate of 44100 Hz for the audio data. The gesture data was recorded at 30 frames per second. Since the challenge required generating the main speaker's gestures for the test set, this data was omitted there.

For our approach, we built upon the preprocessing pipeline established by Chang et al. [11], making necessary modifications to suit our specific requirements. For the audio data, we used multiple feature extraction techniques to obtain three different features: Mel-Frequency Cepstral Coefficients (MFCC) with 40 dimensions, mel spectrograms with 64 filter banks, and prosody features. All audio features were computed using a window length of 4096 and a hop length of 1470.
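A quick arithmetic check on these settings (the alignment motive is our inference, not stated in the paper): at 44100 Hz, a hop of 1470 samples yields exactly 30 audio feature frames per second, matching the 30 fps gesture data one-to-one.

```python
SR = 44100        # audio sample rate (Hz)
HOP = 1470        # hop length used for MFCC, mel spectrogram, and prosody
MOCAP_FPS = 30    # gesture frame rate (frames per second)

# One feature frame is emitted every HOP samples, so the feature rate is:
feature_fps = SR / HOP

# For the roughly 1-minute sequences, this gives one audio feature frame
# per motion-capture frame:
frames_per_minute = 60 * MOCAP_FPS

assert feature_fps == MOCAP_FPS
```

This one-to-one alignment removes any need to resample the audio features before pairing them with gesture frames.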
Regarding the text transcripts, we used the FastText word embedding model [7], which assigns a 300-dimensional vector representation to each word in the transcript. Since the temporal duration of each word is known, we generated a vector of size [sequence length, 300] containing the corresponding word embedding vector for each word's duration. For the gesture data, we transformed the rotation of each body and finger joint in the BVH file into an exponential map representation [22]. This transformation resulted in 56 3D body joints for the gesture data.

In the post-processing phase of the gesture output, we performed two operations. First, we clipped the angle of each generated body joint to be within the range of the 2nd and 98th percentiles of the corresponding joint in the training data. This clipping step ensured that the generated angles remained within a reasonable range. Afterward, we applied a rolling window calculation over 50 frames to smooth the generated output and improve its temporal coherence.

3.7 Training procedure

The training procedure incorporates both behavior cloning and the WGAN architecture. In our setup, the network is responsible for generating gestures, while the discriminator is used to discriminate between the generated data and the original data. We chose a batch size of 128 and a sequence length of 200 frames, which corresponds to two frame windows: t−1 := [0−99] and t0 := [100−199]. For the optimizer, we use AdamW [37] with a weight decay parameter of 0.01 for both the FEIN network and the discriminator. For the FEIN model, we select a learning rate of 5e−5, while the discriminator utilizes a learning rate of 1e−4. During training, we set the scaling factor λ_g to 0.05.

The audio and text data used in training comes from t0, while the gesture data is sourced from t−1. After each prediction step, we optimize the model using the loss function described in Eq. (9), and we optimize the discriminator accordingly using its loss function, as defined in Eq. (4).
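The two post-processing operations described in Section 3.6 above (percentile clipping followed by a rolling-window smoothing) can be sketched as follows. This is a hedged reconstruction: the exact rolling calculation used by the authors is not specified, so a centred rolling mean with edge padding is assumed here:

```python
import numpy as np

def postprocess(joints, train_joints, window=50):
    """Clip each joint angle to the [2nd, 98th] percentile range of the
    training data, then smooth with a centred rolling mean over `window` frames.
    `joints` and `train_joints` are (frames, joints) arrays."""
    lo = np.percentile(train_joints, 2, axis=0)
    hi = np.percentile(train_joints, 98, axis=0)
    clipped = np.clip(joints, lo, hi)
    # rolling mean via convolution; edge padding keeps the output length
    kernel = np.ones(window) / window
    padded = np.pad(clipped, ((window // 2, window - window // 2 - 1), (0, 0)),
                    mode="edge")
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, padded)
```

A 50-frame window at 30 fps smooths over roughly 1.7 seconds, which suppresses jitter at the cost of attenuating fast gesture strokes.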
To prevent the network from consistently outperforming the discriminator and to stabilize the training, we apply the generator loss from Eq. (5) only every n = 4 steps. In total, we trained our framework for 60 epochs. Every 10 epochs, we computed the validation loss and used the best-performing model to generate the evaluation data.

4 EVALUATION

During the training phase of the framework, we conducted a thorough analysis of various framework configurations, experimenting with different numbers of transformer blocks and parameters. We also explored frameworks that generated gestures for both the main agent and the interlocutor, as well as different input data for the FEIN model. Among these tested frameworks, many did not yield satisfactory results in terms of generating realistic and coherent gestures. As a result, we selected the framework proposed in this study as the most suitable for our purposes.

The main evaluation of the framework was performed alongside other approaches within the GENEA Challenge 2023. Since the evaluation of generated co-speech gestures is largely subjective and objective measures that strongly correlate with subjective evaluations are lacking [41], the evaluation focused primarily on subjective measures. Three specific aspects were evaluated: "Human-Likeness", "Appropriateness for Agent Speech", and "Appropriateness for the Interlocutor". To ensure anonymity, all published results were anonymized and assigned unique labels. Our framework was labeled SE.

4.1 Human-Likeness

The results of the Human-Likeness evaluation are shown in Figure 2, illustrating the rating distribution obtained for the different approaches. Figure 3 highlights the significant differences between the competitors. Here, our framework receives significantly higher ratings than the dyadic baseline (BD), the monadic baseline (BM), as well as the approaches SH, SD, SI, SK, SA, SB, and SC.
[Figure 3: Significant differences between all approaches, provided by GENEA Challenge 2023 [31]. Our framework is labeled SE. White indicates that the condition on the y-axis is rated significantly higher than the one on the x-axis, while black indicates the opposite (y rated below x). Gray indicates no statistically significant difference at a significance level of α = 0.05, after applying the Holm-Bonferroni correction.]

On the other hand, compared to the natural motion (NA) and the approaches SG and SF, our framework receives significantly lower ratings for human-likeness. There were no significant differences in terms of human-likeness between our approach and the approaches SJ and SL.

A significant limitation of our approach, especially concerning human-like gesturing, was the lack of finger movement in all of the generated gestures. Although we trained our framework to produce output for the finger bones, the resulting gestures consistently exhibited a static finger position. Any changes observed in the finger bones were primarily intended to prevent the introduction of artifacts, rather than to add meaningful information to the generated gestures.

Another notable issue was the rapid change of poses in our framework. Although the evaluation only captured footage from the knees up, to prevent any foot sliding from influencing the evaluation, our model consistently exhibited movements that involved a redistribution of weight in the lower part of the torso. Such movements may have compromised the naturalness of the generated gestures and led to a lower ranking in the human-likeness evaluation.

4.2 Appropriateness

The results of the speech appropriateness evaluation for the main agent are depicted in Figure 4a. These ratings indicate the likelihood of each framework being preferred with matching or mismatching gestures.
Our proposed framework, labeled SE, demonstrates statistical significance in terms of speech appropriateness compared to random chance. However, it is notably inferior to framework SG, which exhibits significantly better performance. Additionally, there is no significant difference between our framework and the approaches SJ, SF, SK, SD, SI, SB, SA, and SH in terms of speech appropriateness. The results of the appropriateness of gestures in response to the interlocutor are presented in Figure 4b. These ratings reflect the likelihood of each framework being preferred with matching or mismatching gestures. Our framework does not exhibit statistical significance compared to random chance in this aspect. Our model does achieve a significantly higher mean appropriateness score (MAS) compared to frameworks SG and SH, and a significantly lower MAS compared to the natural motion NA. Furthermore, our model does not differ significantly from the dyadic and monadic baselines, as well as frameworks SA, SB, SL, SF, SI, SD, SJ, SC, and SK, in terms of appropriateness of gestures in response to the interlocutor.

The evaluation results presented here show a notable discrepancy when compared to the results of the human-likeness evaluation. While our framework is able to generate co-speech gestures that are perceived as more human-like than the baseline used in the challenge, this does not mean that the generated gestures are perceived as more appropriate for the given context than the baseline. Although the lack of finger bone information could be a possible explanation for this, we suggest that it is indicative of a general problem common to all current approaches to co-speech gesture generation.
Current approaches excel at producing gestures that appear natural and unobtrusive within a given conversation, which is already a commendable achievement for human-agent interaction.

[Figure 4: Bar plots visualizing the response distribution in the appropriateness studies, provided by the GENEA Challenge 2023 [31]: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. Our framework is labeled SE. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.]

However, this still falls well short of replicating human-to-human interaction. In human-to-human communication, individuals convey additional meaning through their gestures [14], which is based on a shared mental model of the current conversation, themselves, and the conversation partner [15, 16].
With this shared understanding, conversational partners can adapt their gestures to each other and effectively convey meaningful information. Since our framework, and to the best of our knowledge all other available co-speech gesture approaches, lacks this essential insight into the conversation partner, the generated gestures appear highly interchangeable to any human evaluator.

Table 2: The Fréchet Gesture Distance (FGD) for each ablation modification, calculated both in the feature space (FGD F-space) and the raw data space (FGD R-space). For both distances, lower is better.

    Methods            FGD F-space ↓   FGD R-space ↓
    natural motion          0.00            0.00
    w/o transformer       169.93         3334.14
    w/o γ-network          84.45         2667.33
    w/o β-network          61.76         1879.82
    w/o audio              50.93          965.05
    w/o text               43.90         1099.48
    w/o main audio         34.98          758.62
    w/o inter text         31.26          767.28
    w/o main text          29.49          777.91
    w/o inter audio        28.54          680.66
    original               23.03          533.04

5 ABLATION STUDY

In order to assess the specific contributions of each component within our proposed framework, we conducted an ablation study. First, different input configurations were investigated, including the exclusion of all textual input ("w/o text"), the exclusion of all audio input ("w/o audio"), and the selective removal of these modalities for the main speaker ("w/o main audio" and "w/o main text") as well as for the interlocutor ("w/o inter audio" and "w/o inter text"). Furthermore, different architectural configurations were explored, including deactivation of the output of the combined transformer ("w/o transformer"), deactivation of the β-network ("w/o β-network"), and exclusion of the multiplication process involving the γ-network (referred to as "w/o γ-network"). The distinction in the generated gestures was measured by using the Fréchet Gesture Distance (FGD), as defined by Yoon et al.
[55], for each modification. The evaluation of this distance was performed both in the feature space of the autoencoder network given by the GENEA 2023 challenge and in the context of the raw data space, similar to Ahuja et al. [2]. Detailed results are presented in Table 2. We make an example video of all modifications available online (https://vimeo.com/853326587).

As can be expected, each modification of the framework leads to an increase in the FGD, both in the feature space and in the raw data space. In terms of the modality-specific inputs associated with the interactive partner, all modifications lead to a comparable increase in the FGD. In particular, the removal of the interlocutor's audio produced the smallest change, while the exclusion of the main speaker's audio produced the largest change. The complete removal of both textual and audio information led to a sharp increase in FGD. Visual inspection of the generated gestures revealed instances of elaborate but misaligned gestures in cases of audio removal, whereas small and infrequent gestures were observed following text removal.

Looking at the modifications of the architectural configurations, it becomes clear that the transformer model has successfully learned to generate the gestures, since its removal leads to strongly degraded performance and the largest increase in FGD of all modifications. Similarly, the removal of the β-network and the γ-network leads to a deterioration of the performance. Looking at the visual results of the β-network ablation, the gestures still show a natural fluid movement but are mainly concentrated in front of the chest and do not show any obvious finger movement. On the other hand, the visual results from the γ-network ablation show fast, erratic movements of the hands and upper body, with some unnatural poses.
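The FGD used above is a Fréchet (2-Wasserstein) distance between Gaussians fitted to two sets of features. A minimal sketch, following the FID/FGD formulation of Yoon et al. [55] but not tied to any particular feature extractor, could look like this:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two (samples, dims)
    feature sets: ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

For "FGD F-space" the inputs would be autoencoder embeddings of real and generated motion; for "FGD R-space" they would be the flattened raw joint data, as in Ahuja et al. [2].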
These results support our intended design choices, with the γ-network focusing mainly on smoothing the temporal information of the generated gestures, while the β-network refines the generated gestures to allow for more elaborate hand movements.

6 CONCLUSION

Our framework presents a novel approach to co-speech gesture generation inspired by robotic imitation learning and based on a behavior cloning architecture. We combine a transformer architecture with a generative adversarial network to create a model that ranks in the top half of the GENEA Challenge 2023 [31]. Although the model did not achieve results comparable to natural motion, we believe that additional training time and more sophisticated input segmentation could lead to improved results. An effective strategy may involve the use of only historical data in the FEIN model to ensure that the input data consists only of aligned gesture, audio, and text data. In addition, the use of a finer-grained control network that distinguishes separate body parts, such as hands and arms, could have the potential to improve the generated gestures. Increasing the feedback provided by the discriminator model in later stages of training is another way to improve performance, as the discriminator shows diminishing returns as training progresses. Additionally, selectively freezing certain models within our framework during later stages of training to focus on refining gestures could lead to performance improvements. Similarly, exploring alternative inference methods, such as predicting one frame at a time or adjusting the time window, may also help to improve the capabilities of the framework. In conclusion, we believe that our architecture demonstrates the potential to generate gestures that exhibit some human-like characteristics, and we believe that there are several ways in which our framework could be improved in the future.
Finally, we hypothesize that the integration of frameworks introduced in multimodal robot learning could further enhance the performance of future gesture generation models.
eaiY-C2JvOi
An interesting paper
8: Top 50% of accepted papers, clear accept
[Paper Summary] This paper proposed a co-speech generation framework combining transformer and Wasserstein GAN architecture. The model was inspired by the BC-Z network architecture, and the authors replaced the vision network component in BC-Z with an attention-based network. The model takes audio, text, and speaker identity information from both the main agent and the interlocutor as inputs, alongside gestures from the interlocutor. The model was trained and tested on the Talking with Hands dataset. Based on the network performance, which was verified in terms of Human-Likeness, Appropriateness for agent speech, and Appropriateness for the interlocutor, the authors claimed that the solution showed good results and significant improvements in human-likeness over the GENEA baseline. [Strengths] 1. This is a well-investigated work, and the proposed approach seems to be interesting. It is nice to see the BC-Z network, designed originally for the robot imitation learning task, has been modified and adapted to the co-speech gesture generation domain. 2. The paper is also well-written and organized. The proposed approach has been explained in a careful manner. 3. Qualitative discussions have been made to reason about the low quality of the generated motions compared to related approaches in the GENEA 2023 challenge. [Weakness] There are several points that the authors may consider in the revised version: 1. Better clarification of the designed approach. What is the purpose of designing a $\beta$ and $\gamma$ network? In other words, which features did the authors aim to extract from those two individual networks? 2. Objective evaluation. Since the position, velocity, and acceleration losses are implemented in the loss function, conducting an ablation study to highlight the contribution of those losses to the total loss function L_total would be interesting. In this case, the authors can rely on objective metrics (that measure the errors of position, velocity, and acceleration between generated and GT motions) introduced in the previous GENEA challenges. 3. Qualitative results. For instance, a figure of generated motion can be included to support the discussion about the lack of finger movement that the authors mentioned in Section 4.1. 4. Typos. The paper should be proofread again to remove typos, for instance, L.389 on Page 4.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
FovoQL3nygw
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation
["Leon Harz", "Hendric Vo\u00df", "Stefan Kopp"]
Human communication relies on multiple modalities such as verbal expressions, facial cues, and bodily gestures. Developing computational approaches to process and generate these multimodal signals is critical for seamless human-agent interaction. A particular challenge is the generation of co-speech gestures due to the large variability and number of gestures that can accompany a verbal utterance, leading to a one-to-many mapping problem. This paper presents an approach based on a Feature Extraction Infusion Network (FEIN-Z) that adopts insights from robot imitation learning and applies them to co-speech gesture generation. Building on the BC-Z architecture, our framework combines transformer architectures and Wasserstein generative adversarial networks. We describe the FEIN-Z methodology and evaluation results obtained within the GENEA Challenge 2023, demonstrating good results and significant improvements in human-likeness over the GENEA baseline. We discuss potential areas for improvement, such as refining input segmentation, employing more fine-grained control networks, and exploring alternative inference methods.
["machine learning", "deep learning", "co-speech gesture generation", "gesture synthesis", "multimodal data", "transformer", "behavior cloning", "reinforcement learning"]
ABSTRACT

Human communication relies on multiple modalities such as verbal expressions, facial cues, and bodily gestures. Developing computational approaches to process and generate these multimodal signals is critical for seamless human-agent interaction. A particular challenge is the generation of co-speech gestures due to the large variability and number of gestures that can accompany a verbal utterance, leading to a one-to-many mapping problem. This paper presents an approach based on a Feature Extraction Infusion Network (FEIN-Z) that adopts insights from robot imitation learning and applies them to co-speech gesture generation. Building on the BC-Z architecture, our framework combines transformer architectures and Wasserstein generative adversarial networks. We describe the FEIN-Z methodology and evaluation results obtained within the GENEA Challenge 2023, demonstrating good results and significant improvements in human-likeness over the GENEA baseline. We discuss potential areas for improvement, such as refining input segmentation, employing more fine-grained control networks, and exploring alternative inference methods.

CCS CONCEPTS

• Human-centered computing → Interactive systems and tools; Empirical studies in interaction design; HCI theory, concepts and models; • Computing methodologies → Neural networks; Learning latent representations; Unsupervised learning.

KEYWORDS

machine learning; deep learning; co-speech gesture generation; gesture synthesis; multimodal data; transformer; behavior cloning; reinforcement learning

ACM Reference Format:
Leon Harz∗, Hendric Voß∗, and Stefan Kopp. 2023. FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France.
ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3577190.3616115

∗Both authors contributed equally to the paper.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ICMI '23, October 9–13, 2023, Paris, France
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0055-2/23/10. . . $15.00
https://doi.org/10.1145/3577190.3616115

1 INTRODUCTION

Human communication is a multifaceted process that relies on various modalities, including verbal expressions, facial cues, and bodily gestures. Combining these modalities allows us to convey complex messages and facilitate meaningful interactions [9, 50]. Consequently, the development of machines that can process and generate these multi-modal signals is crucial to enable seamless interaction between humans and agents. A key aspect that makes gesture generation particularly challenging is the existence of multiple valid gestures for a given interaction. Unlike verbal expressions, which often have a single intended meaning, gestures can convey different nuances and interpretations, leading to a one-to-many mapping problem [41]. Capturing this inherent variability and generating contextually appropriate gestures is a complex task that requires careful consideration. The importance of gesture generation extends beyond research to practical applications in real-world scenarios and virtual environments.
In human-robot interaction, gestures play a crucial role in enhancing communication and facilitating natural interactions between humans and robotic agents [56]. Similarly, in virtual reality, realistic and expressive gestures contribute to immersion and engagement, enabling more intuitive and compelling experiences [35]. Therefore, the development of robust and effective gesture-generation methods has great potential for improving various areas of human-machine interaction.

In this work, we propose the FEIN-Z framework, a combination of the proposed Feature Extraction Infusion Network (FEIN) and the zero-shot learning aspect of the BC-Z architecture (Z). Inspired by recent achievements in robotic imitation learning, we extend the BC-Z approach [27], intended to generalize robotic manipulation tasks to unseen problems, to the co-speech gesture generation domain. As transformer architectures have shown promising results in a wide variety of domains [17, 48], including co-speech gesture generation [38], we replace and extend multiple components of the original BC-Z approach with a transformer architecture. Generative adversarial networks (GANs) are widely used in the robotic and co-speech gesture generation domains [20, 52]. Building upon the insight gained from recent approaches [52], we propose to use a Wasserstein generative adversarial network (WGAN) with a Wasserstein divergence objective to guide our framework to generate natural and expressive gestures. The released evaluation results of the GENEA Challenge 2023 show that our framework outperforms the challenge baseline with regard to human-likeness by a significant margin and ranks in the top half of all evaluated approaches [31].
In the next sections, we will first give a brief overview of the existing work and current achievements of co-speech gesture generation (Section 2), before detailing the proposed FEIN-Z architecture, the individual components, the data processing, and our training procedure (Section 3). Finally, we will discuss the results of the performed evaluation (Section 4) and conclude with an outlook for possible improvements of our work (Section 6).

2 RELATED WORK

Gesture generation is an area of research that is rapidly progressing. Previous studies have explored various approaches, initially focusing on rule-based methods [10, 29, 34, 40] and simple computational models [8, 19], and later transitioning to early machine learning techniques [12, 23]. Currently, data-driven approaches that integrate multiple modalities are being employed [4, 41, 59], advancing the field even further.

Initially, gesture generation relied on manually crafted rules, either directly applied to specific avatars or used in conjunction with computational models that estimated appropriate gestures based on accompanying speech [10, 19, 29, 34]. Although these approaches generally struggled to produce natural and fluent gestures, they did enable the creation of complex representative gestures that are challenging to achieve with current data-driven methods [5, 6, 29, 34].

During the beginning of data-driven gesture generation, the focus was primarily on single modalities, where gestures were generated based on previous gesture frames [47], textual inputs [12, 56], or audio-driven inputs [18, 21, 23]. Recent research has witnessed a notable shift towards the generation of multi-modal co-speech gestures. This approach integrates gestures with audio, text, and other input modalities to produce varied and natural gestures.
To accomplish this, advanced techniques such as generative adversarial networks (GANs) [3, 41, 52, 54, 55], cyclic functions [26], glow networks with invertible convolutions [24], variational autoencoders [38, 46], and deep reinforcement learning have been used [46]. Recurrent neural networks, specifically Bi-Directional Long Short-Term Memory (Bi-Directional LSTM) and gated recurrent unit (GRU) [13, 25], have demonstrated the ability to generate natural co-speech gestures [23, 57], with various adaptations of recurrent architectures still being utilized in recent approaches [28, 30, 44, 51]. Notably, the incorporation of style embeddings has facilitated the generation of distinct gesture styles for individual speakers, thereby enabling diverse variations in gestures that are tailored to specific styles or speakers [21, 55].

Recent advancements in the field of co-speech gesture generation can be broadly categorized into two main approaches: retrieval-based methods and learning-based methods. Retrieval-based methods involve the creation or learning of predefined sets of gesture units and employ techniques such as keyword matching, semantic analysis, and prosody analysis to retrieve corresponding gestures from a comprehensive database [59]. Conversely, learning-based methods focus on training models to directly predict co-speech gestures using paired co-speech gesture data [55]. In recent studies, some researchers have automated the creation of gesture unit databases by leveraging training data. These gesture units are then employed to train deep learning models, enabling the generation of new and varied co-speech gestures [38]. Both retrieval-based and learning-based methods have proven to be effective in addressing the inherent challenge of one-to-many mapping in co-speech gestures [11, 32, 44, 55].
Notably, recent work on retrieval-based methods has even demonstrated superior performance compared to ground truth gestures [58, 59].

Simultaneously, significant progress has been made in the realm of reinforcement learning for robot control, particularly in the utilization of text and visual data as input. Within this context, text data is commonly employed either as action descriptions or goal descriptions. Recently, successful approaches have emerged leveraging large language models (LLMs), which generate suitable plans for given goals [1] [42] [36]. These approaches harness LLMs to break down goal descriptions into a sequence of feasible low-level actions expressed in natural language. Subsequently, the action descriptions undergo embedding and serve as additional input to a reinforcement learning model. As an example, PaLM-SayCan incorporates the BC-Z network [27] to acquire low-level robot skills by providing visual data of the current state alongside text descriptions of planned actions.

Both the co-speech gesture generation and reinforcement imitation learning domains share a common goal: to generate elaborate and complex outputs by acquiring knowledge from a relatively limited data set. As the imitation learning domain has made significant progress in minimizing the data requirements for generating complex outputs, we believe that these achievements can be leveraged in the gesture generation domain. Therefore, we propose our novel framework, which is built on the foundation of imitation learning, with the expectation of extending these advances to gesture generation.

3 MODEL AND METHOD

Our framework builds upon the BC-Z architecture by Jang et al. [27], which is a flexible imitation learning system that can learn from both demonstrations and interventions for a given zero-shot task. Similar to our approach, the BC-Z architecture generates its output in an autoregressive manner.
However, given the unique domain and data characteristics of co-speech gestures, we have made several modifications to the backbone of the BC-Z architecture to adapt it to our domain. In particular, we replaced the vision network component of BC-Z with an attention-based network that takes inputs from each modality (Transformer Network). In addition, we refined the Feature-wise Linear Modulation (FiLM) network [43], while retaining the fundamental concept of linear modulation applied to the previous embedding. We refer to this modified FiLM architecture as the Feature Extraction Infusion Network (FEIN). Our framework takes audio, text, and speaker identity information from both the main agent and the interlocutor as input, alongside gestures from the interlocutor. To incorporate the temporal dimension of the provided data, we employ positional encoding techniques proposed by Vaswani et al. [49]. The transformer network receives audio features, text features, and speaker identity information from both the main agent and the interlocutor. The FEIN module also utilizes this data, with the addition of previous t-gestures from both the main agent and the interlocutor. The output of the transformer network is then combined with features extracted from the FEIN module. The resulting embedding is further processed by a joint-specific Fully Connected Network (FCN). In addition to the architectural refinements, we utilize a Wasserstein GAN with Wasserstein divergence (WGAN-div) to improve the generation performance of our framework [53]. For this, we employ a discriminator with an FCN consisting of four linear layers, using the leaky ReLU activation function [39].
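The positional encoding mentioned above is the standard sinusoidal scheme of Vaswani et al. [49]; a minimal reference implementation (assuming an even model dimension) is:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding (Vaswani et al., 2017):
    PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...).
    Assumes d_model is even."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe
```

The resulting (seq_len, d_model) matrix is simply added to the input embeddings before the transformer blocks, giving each frame a unique, smoothly varying time signature.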
Figure 1 gives an overview of our approach. In the following sections, we will provide a detailed description of the sub-modules of this framework, including the attention-based network, FEIN, and the control network.

3.1 Transformer Blocks

The presented framework incorporates a total of four transformer blocks, each possessing a consistent underlying architecture with distinct parameters. These blocks comprise a multi-attention head followed by a feedforward network. To augment the capabilities of the feedforward network, we have introduced the Swish-Gated Linear Unit (SwiGLU) activation function [45] into the transformer blocks. As a result, the output y of the transformer blocks can be computed as follows:

x = MultiHead(Q, K, V) = Concat(head_1, ..., head_n) W_0    (1)
f(x) = Swish(x · W_1) ⊗ (x · W_2)    (2)
y = f(x) · W_3    (3)

In the above equations, MultiHead denotes the multi-headed attention layer, Swish represents the swish activation function, and W corresponds to the weights of the linear functions.

3.2 Transformer Network

The BC-Z framework initially relied on visual data, specifically images, to predict robot actions based on the current context. However, our specific scenario lacks visual data, therefore requiring modifications to the original architecture. To address this challenge, we adopt a transformer network, known for its capacity to model long-term dependencies within structured input data. Central to our approach is the integration of audio and text input from both the main agent and the interlocutor. Particularly, audio and text data are processed independently. For each input modality, the framework computes an attention-based embedding, which learns the information and relationships present within the data. The individual attention-based embeddings obtained in the preceding step are then aggregated and passed through an additional multi-attention mechanism, known as the 'Combined Transformer'.
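The SwiGLU feedforward used inside each transformer block, Eqs. (2)-(3), can be sketched in NumPy as follows; x stands for the multi-head attention output of Eq. (1), and the dimensions are illustrative rather than the paper's.

```python
import numpy as np

def swish(x):
    # Swish(x) = x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, W1, W2, W3):
    """SwiGLU feedforward, Eqs. (2)-(3): y = (Swish(x W1) * (x W2)) W3."""
    return (swish(x @ W1) * (x @ W2)) @ W3

rng = np.random.default_rng(0)
d_model, d_ff = 8, 16
x = rng.standard_normal((4, d_model))       # attention output of Eq. (1)
W1 = rng.standard_normal((d_model, d_ff))
W2 = rng.standard_normal((d_model, d_ff))
W3 = rng.standard_normal((d_ff, d_model))
y = swiglu_ffn(x, W1, W2, W3)               # (4, d_model)
```

The gating path (x W_2) lets the block suppress or pass each hidden unit of the swish path, which is the key difference from a plain ReLU feedforward.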
This combination stage aims to identify and encapsulate important cues related to the interplay between audio and text data. The resultant composite embedding effectively captures salient information and data relationships, forming the fundamental basis for subsequent processes.

3.3 Feature Extraction Infusion Network (FEIN)

The FiLM network initially used in the BC-Z approach [27] requires a task description and a human demonstration video as inputs. However, this approach isn't directly applicable to our specific case. Therefore, we designed a novel network architecture that establishes connections between the current audio-text inputs and the gestures observed in the previous time window. Our dual goals were to ensure coherent gesture generation by conditioning on previous gestures and to inject additional contextual information into the current context.

To achieve these goals, we use three separate stacks of 1D convolutional layers to process the concatenated audio-text data and gesture information. This approach results in an embedding with an enriched spatial feature space, effectively capturing important spatial relationships. For meaningful interplay within these embeddings, a multi-head attention mechanism is incorporated. In this mechanism, the gesture embedding serves as both query and value, while the audio-text embedding acts as the key. The goal of this attention-based embedding is to learn complex dependencies between gestures and audio-text data. The resulting attention-based embedding then traverses two different feed-forward networks. Each network consists of two linear layers with SiLU activation functions to promote non-linearity and information propagation. A normalization layer completes each network, ensuring consistent and stable feature representations. This architectural configuration aims to facilitate the extraction of two essential feature networks: the γ-network and the β-network. These networks contain critical information for the following control model.
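The FEIN attention with its unusual query/key/value assignment can be sketched as below. This is a deliberately simplified single-head version: the actual module uses convolutional embedding stacks, multi-head attention, and two-layer SiLU feed-forward networks with normalization, and all names and dimensions here are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fein_gamma_beta(gesture_emb, audiotext_emb, Wg, Wb):
    """Single-head sketch of the FEIN attention: the gesture embedding acts
    as query and value, the audio-text embedding as key; two linear heads
    stand in for the gamma- and beta-feedforward networks."""
    d = gesture_emb.shape[-1]
    scores = gesture_emb @ audiotext_emb.T / np.sqrt(d)  # query x key
    attn = softmax(scores) @ gesture_emb                 # value = gestures
    gamma = attn @ Wg                                    # multiplicative cue
    beta = attn @ Wb                                     # additive cue
    return gamma, beta

rng = np.random.default_rng(1)
T, d = 10, 16
g = rng.standard_normal((T, d))   # embedding of previous-window gestures
a = rng.standard_normal((T, d))   # embedding of the audio-text window
gamma, beta = fein_gamma_beta(g, a,
                              rng.standard_normal((d, d)),
                              rng.standard_normal((d, d)))
```

Using the gestures as value means the output is always a mixture of gesture features, with the audio-text stream only steering which past frames are attended to.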
Within the control network architecture, the role of the γ-network is to provide timing information about previous gestures to the embedding. This helps to maintain gesture consistency across time windows and counteract fragmented gestures. On the other hand, the β-network, due to its additive nature, provides nuanced details to the embedding. This feature allows the framework to capture subtle gestures that might be suppressed by the relatively coarse influence of the γ-network.

3.4 Control Network

The embedding network, derived from the transformer network, along with the γ and β networks from the FEIN model, serve as inputs for the control network. This network architecture is founded on the framework proposed by Jang et al. [27]. Initially, the embedding undergoes convolutional layer processing, resulting in a distilled embedding.

Table 1: The employed joints and their corresponding categorizations within the control network

Body part  | Number of joints | Joints
root       | 3  | b_root
upper body | 21 | b_spine0, b_spine1, b_spine2, b_spine3, b_neck0, b_head
left leg   | 6  | b_l_upleg, b_l_leg
right leg  | 6  | b_r_upleg, b_r_leg
left arm   | 18 | b_l_shoulder, b_l_arm, b_l_arm_twist, b_l_forearm, b_l_wrist_twist, b_l_wrist
left hand  | 48 | b_l_pinky1...3, b_l_ring1...3, b_l_middle1...3, b_l_index1...3, b_l_thumb0...3
right arm  | 18 | b_r_shoulder, b_r_arm, b_r_arm_twist, b_r_forearm, b_r_wrist_twist, b_r_wrist
right hand | 48 | b_r_thumb0...3, b_r_pinky1...3, b_r_middle1...3, b_r_ring1...3, b_r_index1...3

ICMI '23, October 9–13, 2023, Paris, France — Harz et al.

Figure 1: Top: The proposed FEIN model with the convolutional embedder, transformer block, and γ- and β-FCN. Bottom: Transformer model with transformer blocks. Right: Control network with convolutional layers and γ and β infusion. All inputs (Gesture, Text, Audio, Speaker ID) consist of concatenated speaker and interlocutor information. The subscripts (0:99) and (100:199) denote distinct time windows represented by the input data.
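The body-part split in Table 1 implies one output head per part reading a shared embedding. A minimal sketch with single linear heads (the paper's FCNs have more layers; the embedding size of 32 is a placeholder):

```python
import numpy as np

BODY_PARTS = {  # output dimensions taken from Table 1
    "root": 3, "upper_body": 21, "left_leg": 6, "right_leg": 6,
    "left_arm": 18, "left_hand": 48, "right_arm": 18, "right_hand": 48,
}

def per_part_heads(embedding, weights):
    """One small head per body part reads the shared embedding and emits
    that part's joint configuration; outputs are concatenated into a pose."""
    return np.concatenate([embedding @ weights[p] for p in BODY_PARTS],
                          axis=-1)

rng = np.random.default_rng(6)
d = 32  # hypothetical shared-embedding size
W = {p: rng.standard_normal((d, n)) for p, n in BODY_PARTS.items()}
pose = per_part_heads(rng.standard_normal((1, d)), W)
assert pose.shape == (1, sum(BODY_PARTS.values()))  # (1, 168)
```

Separate heads let each part develop its own feature space, as the text argues, while still conditioning on one shared context vector.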
Subsequently, this distilled embedding is enriched through element-wise multiplication with the γ-network output, which effectively integrates contextual information from the FEIN module. A subsequent convolutional layer processes the modulated output, combining information and yielding a transformed embedding. To further infuse the embedding with contextual cues, the transformed embedding is subject to element-wise addition with the β-network output. This step augments the embedding with supplementary contextual information. Following a final convolutional layer, the output is normalized, yielding a vector that merges current relevant features with essential contextual information. This integration is pivotal for generating coherent gestures, especially when considering the influence of preceding gestures. This processed vector then progresses through a sequence of fully connected networks (FCNs), with each FCN generating joint configurations for specific body parts; see Figure 1. This design imparts fine-grained control over individual body parts, thus facilitating precise manipulation of the model's movements. The employment of independent body-part-specific FCNs allows the framework to extract distinct features from the shared embedding, enabling a body-part-specific feature space.

3.5 Loss

The loss functions used in our framework are defined as follows. For the discriminator, the loss function is given by:

L^D_wdiv(x, D(z)) = Dis(x) − Dis(D(z)) + δ |∇_x̂ Dis(x̂)|^p    (4)

Here, Dis represents the discriminator function, x represents the original dataset, and z represents the reconstructed data. The hyperparameter δ controls the magnitude of the divergence penalty. The first component of the loss, Dis(x) − Dis(D(z)), measures the dissimilarity between the real sample x and the output of our framework, D(z). The second term, δ |∇_x̂ Dis(x̂)|^p, corresponds to the divergence penalty, which encourages the generated sample D(z) to closely resemble the distribution of real data.
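The discriminator loss of Eq. (4) can be sketched with a toy critic whose input gradient is known analytically, avoiding autodiff. The values δ = 2 and p = 6 are common WGAN-div defaults, not values reported by the paper, and sampling x̂ as an interpolate between real and generated batches is likewise an assumption.

```python
import numpy as np

def wgan_div_d_loss(dis, dis_grad, x_real, x_fake, delta=2.0, p=6):
    """Eq. (4) sketch: critic gap plus a gradient penalty evaluated at
    interpolates x_hat between real and generated samples."""
    eps = np.random.default_rng(7).uniform(size=(len(x_real), 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    grad_norm = np.linalg.norm(dis_grad(x_hat), axis=-1)
    penalty = delta * np.mean(grad_norm ** p)
    return np.mean(dis(x_real)) - np.mean(dis(x_fake)) + penalty

# toy linear critic Dis(x) = x @ w, whose input gradient is simply w
w = np.array([0.1, -0.2, 0.3])
dis = lambda x: x @ w
dis_grad = lambda x: np.broadcast_to(w, x.shape)
rng = np.random.default_rng(8)
loss = wgan_div_d_loss(dis, dis_grad,
                       rng.standard_normal((64, 3)),   # "real" batch
                       rng.standard_normal((64, 3)))   # "generated" batch
```

In practice Dis is the four-layer leaky-ReLU FCN described earlier and the gradient comes from the deep-learning framework's autodiff.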
The generator loss function is defined as:

L^G_wdiv = Dis(D(z))    (5)

This loss function aims to minimize the output of the discriminator, specifically the evaluation of Dis(D(z)).

For behavior cloning, we employ a scaled version of the smoothed L1 loss, defined as:

L_1 = 0.5 θ (x_θ − z_θ)^2 / β,    if |x − z| < β
L_1 = θ |x_θ − z_θ| − 0.5 β,      otherwise    (6)

This loss function is applied to the positions y and ŷ, velocities y′ and ŷ′, and accelerations y″ and ŷ″. For this, the gradients are calculated using the following formula:

f(y) = Σ_{i=0}^{2} λ_i (d^i y / dt^i)    (7)

L_bc = L_1(f(y_i), f(ŷ_i))    (8)

In these equations, y represents the true gestures, while ŷ denotes the predicted gestures. The function f(y) calculates the gradients of the variable or function y with respect to time. The superscript i in d^i y/dt^i indicates the order of the derivative, ranging from 0 to 2. The λ_i terms are scaling factors applied to the position, velocity, and acceleration losses.

The term L_bc corresponds to the loss function used for backpropagation. It is computed as the average of the individual loss terms L_i over a dataset of size N. Each L_i measures the dissimilarity between the calculated gradients f(y_i) and the target gradients f(y*_i). Together, this loss ensures the temporal consistency of the generated gestures.

Figure 2: Box plot visualization for the human-likeness study, provided by the GENEA Challenge 2023 [31]. Our framework is labeled SE. Median ratings are shown as red bars (with 0.05 CI) and mean ratings as yellow diamonds (with 0.05 CI). Box edges indicate the 25th and 75th percentiles. Whiskers cover 95% of ratings for each condition.
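The behavior-cloning loss of Eqs. (6)-(8) can be sketched with finite differences standing in for the time derivatives. The per-joint scaling θ and the λ_i weights are simplified here (θ = λ_i = 1) for illustration.

```python
import numpy as np

def smooth_l1(x, z, beta=1.0):
    """Smoothed L1 of Eq. (6), with the theta scaling omitted for clarity."""
    d = np.abs(x - z)
    return np.where(d < beta, 0.5 * d ** 2 / beta, d - 0.5 * beta)

def bc_loss(y_true, y_pred, lambdas=(1.0, 1.0, 1.0), beta=1.0):
    """Smoothed L1 over positions, velocities, and accelerations (Eqs. 7-8),
    with the i-th derivative approximated by i temporal differences."""
    loss = 0.0
    for i, lam in enumerate(lambdas):
        dt, dp = y_true, y_pred
        for _ in range(i):  # apply np.diff i times for the i-th derivative
            dt, dp = np.diff(dt, axis=0), np.diff(dp, axis=0)
        loss += lam * smooth_l1(dt, dp, beta).mean()
    return loss

rng = np.random.default_rng(3)
y = rng.standard_normal((30, 6))  # 30 frames, 6 joint angles
assert bc_loss(y, y) == 0.0       # identical motions incur zero loss
```

Penalizing velocity and acceleration errors, not just positions, is what gives the loss its temporal-consistency effect.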
The overall loss function used in our framework is a combination of the behavior cloning loss (L_bc) and the discriminator loss (L^G_wdiv):

L_total = L_bc + 1_n · λ_g · L^G_wdiv    (9)

Here, 1_n(s) is an indicator function defined as:

1_n(s) = 1 if s % n = 0, and 0 otherwise    (10)

This indicator function is used to determine when to apply the discriminator loss. The parameter n controls the frequency of applying the discriminator loss, and the scaling factor λ_g adjusts the relative importance of the discriminator loss compared to the behavior cloning loss. By combining these components, the overall loss function guides the training process to improve the quality and consistency of the generated gestures.

3.6 Data Processing

The GENEA Challenge 2023 provided an adapted version of the Talking With Hands 16.2M dataset [33], extended to a dyadic setting involving both a speaker and an interlocutor. This dataset encompasses various modalities, including 3D full-body gesture data, audio data, text transcripts, and the speaker ID, all organized separately for the speaker and the interlocutor. As part of the challenge, the data was pre-separated into a training set of 371 sequences, a validation set of 40 sequences, and a test set of 69 sequences. Each sequence is approximately 1 minute in length, with a sample rate of 44100 Hz for the audio data. The gesture data was recorded at 30 frames per second. Since the challenge required the generation of the speaker for the test set, this data was omitted.

For our approach, we built upon the preprocessing pipeline established by Chang et al. [11], making necessary modifications to suit our specific requirements. For the audio data, we used multiple feature extraction techniques to obtain three different features: Mel Frequency Cepstral Coefficients (MFCC) with 40 dimensions, Mel Spectrograms with 64 filter banks, and prosody features. All audio features were computed using a window length of 4096 and a hop length of 1470.
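The scheduling of Eqs. (9)-(10) above amounts to gating the adversarial term with a step counter; n = 4 and λ_g = 0.05 are the values given in the training procedure.

```python
def total_loss(l_bc, l_gen, step, n=4, lambda_g=0.05):
    """Eq. (9): add the scaled generator loss only on every n-th step,
    as dictated by the indicator function of Eq. (10)."""
    indicator = 1.0 if step % n == 0 else 0.0
    return l_bc + indicator * lambda_g * l_gen

# The adversarial term fires on steps 0, 4, 8, ... and is skipped otherwise.
assert total_loss(1.0, 2.0, step=4) == 1.0 + 0.05 * 2.0
assert total_loss(1.0, 2.0, step=5) == 1.0
```

Gating the generator loss this way keeps the behavior-cloning signal dominant while still letting the discriminator shape the output distribution periodically.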
Regarding the text transcripts, we used the fastText word embedding model [7], which assigns a 300-dimensional vector representation to each word in the transcript. Since the temporal duration of each word is known, we generated a vector of size [sequence length, 300] containing the corresponding word embedding vector for each word's duration. For the gesture data, we transformed the rotation of each body and finger joint in the BVH file into an exponential map representation [22]. This transformation resulted in 56 3D body joints for the gesture data.

In the post-processing phase of the gesture output, we performed two operations. First, we clipped the angle of each generated body joint to be within the range of the 2nd and 98th percentiles of the corresponding joint in the training data. This clipping step ensured that the generated angles remained within a reasonable range. Afterward, we applied a rolling window calculation over 50 frames to smooth the generated output and improve its temporal coherence.

3.7 Training Procedure

The training procedure incorporates both behavior cloning and the WGAN architecture. In our setup, the network is responsible for generating gestures, while the discriminator is used to discriminate between the generated data and the original data. We chose a batch size of 128 and a sequence length of 200 frames, which corresponds to two frame windows: t_{−1} := [0–99] and t_0 := [100–199]. For the optimizer, we use AdamW [37] with a weight decay parameter of 0.01 for both the FEIN network and the discriminator. For the FEIN model, we select a learning rate of 5e−5, while the discriminator utilizes a learning rate of 1e−4. During training, we set the scaling factor λ_g to 0.05.

The audio and text data used in training come from t_0, while the gesture data is sourced from t_{−1}. After each prediction step, we optimize the model using the loss function described in Equation (9), and we optimize the discriminator accordingly using its loss function, as defined in Equation (4).
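The two post-processing steps described in Section 3.6 (percentile clipping, then a 50-frame rolling mean) can be sketched as below. The paper does not state whether the rolling window is trailing or centered, so a trailing window with shrinking edges is assumed here.

```python
import numpy as np

def postprocess(pred, train_data, window=50):
    """Clip each joint angle to the [2nd, 98th] percentile range of the
    training data, then smooth with a trailing rolling mean."""
    lo = np.percentile(train_data, 2, axis=0)
    hi = np.percentile(train_data, 98, axis=0)
    clipped = np.clip(pred, lo, hi)
    out = np.empty_like(clipped)
    for t in range(len(clipped)):
        a = max(0, t - window + 1)          # shrinking window at the start
        out[t] = clipped[a:t + 1].mean(axis=0)
    return out

rng = np.random.default_rng(4)
train = rng.standard_normal((1000, 3))       # stand-in training angles
pred = 10.0 * rng.standard_normal((200, 3))  # deliberately out of range
sm = postprocess(pred, train)
```

Clipping bounds outliers to the range actually observed in training, and the rolling mean removes the frame-to-frame jitter that autoregressive prediction tends to introduce.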
To prevent the network from consistently outperforming the discriminator and to stabilize the training, we apply the loss in Equation (5) only every n = 4 steps. In total, we trained our framework for 60 epochs. Every 10 epochs, we computed the validation loss and used the best-performing model to generate the evaluation data.

4 EVALUATION

During the training phase of the framework, we conducted a thorough analysis of various framework configurations, experimenting with different numbers of transformer blocks and parameters. We also explored frameworks that generated gestures for both the main agent and the interlocutor, as well as different input data for the FEIN model. Among these tested frameworks, many did not yield satisfactory results in terms of generating realistic and coherent gestures. As a result, we selected the framework proposed in this study as the most suitable for our purposes.

The main evaluation of the framework was performed alongside other approaches within the GENEA Challenge 2023. Since the evaluation of generated co-speech gestures is largely subjective and objective measures that strongly correlate with subjective evaluations are lacking [41], the evaluation focused primarily on subjective measures. Three specific aspects were evaluated: "Human-Likeness", "Appropriateness for Agent Speech", and "Appropriateness for the Interlocutor". To ensure anonymity, all published results were anonymized and assigned unique labels. Our framework was labeled SE.

4.1 Human-Likeness

The results of the Human-Likeness evaluation are shown in Figure 2, illustrating the rating distribution obtained for the different approaches. Figure 3 highlights the significant differences between the competitors. Here, our framework receives significantly higher ratings than the dyadic baseline (BD), the monadic baseline (BM), as well as the approaches SH, SD, SI, SK, SA, SB, and SC.
Figure 3: Significant differences between all approaches, provided by GENEA Challenge 2023 [31]. Our framework is labeled SE. White indicates that the condition on the y-axis is rated significantly higher than the one on the x-axis, while black indicates the opposite (y rated below x). Gray indicates no statistically significant difference at a significance level of α = 0.05, after applying the Holm-Bonferroni correction.

On the other hand, compared to the natural motion (NA) and the approaches SG and SF, our framework receives significantly lower ratings for human-likeness. There were no significant differences in terms of human-likeness between our approach and the approaches SJ and SL.

A significant limitation of our approach, especially concerning human-like gesturing, was the lack of finger movement in all of the generated gestures. Although we trained our framework to produce output for the finger bones, the resulting gestures consistently exhibited a static finger position. Any changes observed in the finger bones were primarily intended to prevent the introduction of artifacts, rather than to add meaningful information to the generated gestures.

Another notable issue was the rapid change of poses in our framework. Although the evaluation only captured footage from the knees up, to prevent any foot sliding from influencing the evaluation, our model consistently exhibited movements that involved a redistribution of weight in the lower part of the torso. Such movements may have compromised the naturalness of the generated gestures and led to a lower ranking in the human-likeness evaluation.

4.2 Appropriateness

The results of the speech appropriateness evaluation for the main agent are depicted in Figure 4a. These ratings indicate the likelihood of each framework being preferred with matching or mismatching gestures.
Our proposed framework, labeled SE, demonstrates statistical significance in terms of speech appropriateness compared to random chance. However, it is notably inferior to framework SG, which exhibits significantly better performance. Additionally, there is no significant difference between our framework and the approaches SJ, SF, SK, SD, SI, SB, SA, and SH in terms of speech appropriateness. The results of the appropriateness of gestures in response to the interlocutor are presented in Figure 4b. These ratings reflect the likelihood of each framework being preferred with matching or mismatching gestures. Our framework does not exhibit statistical significance compared to random chance in this aspect. Our model does achieve a significantly higher mean appropriateness score (MAS) compared to frameworks SG and SH, and a significantly lower MAS compared to the natural motion NA. Furthermore, our model does not differ significantly from the dyadic and monadic baselines, as well as frameworks SA, SB, SL, SF, SI, SD, SJ, SC, and SK, in terms of appropriateness of gestures in response to the interlocutor.

The evaluation results presented here show a notable discrepancy when compared to the results of the human-likeness evaluation. While our framework is able to generate co-speech gestures that are perceived as more human-like than the baseline used in the challenge, this does not mean that the generated gestures are perceived as more appropriate for the given context than the baseline. Although the lack of finger bone information could be a possible explanation for this, we suggest that it is indicative of a general problem common to all current approaches to co-speech gesture generation.
Current approaches excel at producing gestures that appear natural and unobtrusive within a given conversation, which is already a commendable achievement for human-agent interaction.

Figure 4: Bar plots visualizing the response distribution in the appropriateness studies, provided by the GENEA Challenge 2023 [31]: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. Our framework is labeled SE. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.

However, this still falls well short of replicating human-to-human interaction. In human-to-human communication, individuals convey additional meaning through their gestures [14], which is based on a shared mental model of the current conversation, themselves, and the conversation partner [15, 16].
With this shared understanding, conversational partners can adapt their gestures to each other and effectively convey meaningful information. Since our framework, and to the best of our knowledge all other available co-speech gesture approaches, lacks this essential insight into the conversation partner, the generated gestures appear highly interchangeable to any human evaluator.

Table 2: The Fréchet Gesture Distance (FGD) for each ablation modification, calculated both in the feature space (FGD F-space) and the raw data space (FGD R-space). For both distances, lower is better.

Methods         | FGD F-space ↓ | FGD R-space ↓
natural motion  | 0.00   | 0.00
w/o transformer | 169.93 | 3334.14
w/o γ-network   | 84.45  | 2667.33
w/o β-network   | 61.76  | 1879.82
w/o audio       | 50.93  | 965.05
w/o text        | 43.90  | 1099.48
w/o main audio  | 34.98  | 758.62
w/o inter text  | 31.26  | 767.28
w/o main text   | 29.49  | 777.91
w/o inter audio | 28.54  | 680.66
original        | 23.03  | 533.04

5 ABLATION STUDY

In order to assess the specific contributions of each component within our proposed framework, we conducted an ablation study. First, different input configurations were investigated, including the exclusion of all textual input ("w/o text"), the exclusion of all audio input ("w/o audio"), and the selective removal of these modalities for the main speaker ("w/o main audio" and "w/o main text") as well as for the interlocutor ("w/o inter audio" and "w/o inter text"). Furthermore, different architectural configurations were explored, including deactivation of the output of the combined transformer ("w/o transformer"), deactivation of the β-network ("w/o β-network"), and exclusion of the multiplication process involving the γ-network ("w/o γ-network"). The distinction in the generated gestures was measured by using the Fréchet Gesture Distance (FGD), as defined by Yoon et al.
[55], for each modification. The evaluation of this distance was performed both in the feature space of the autoencoder network given by the GENEA 2023 challenge and in the context of the raw data space, similar to Ahuja et al. [2]. Detailed results are presented in Table 2. We make an example video of all modifications available online¹.

As can be expected, each modification of the framework leads to an increase in the FGD, both in the feature space and in the raw data space. In terms of the modality-specific inputs associated with the interactive partner, all modifications lead to a comparable increase in the FGD. In particular, the removal of the interlocutor's audio produced the smallest change, while the exclusion of the main speaker's audio produced the largest change. The complete removal of both textual and audio information led to a sharp increase in FGD. Visual inspection of the generated gestures revealed instances of elaborate but misaligned gestures in cases of audio removal, whereas small and infrequent gestures were observed following text removal.

¹https://vimeo.com/853326587

Looking at the modifications of the architectural configurations, it becomes clear that the transformer model has successfully learned to generate the gestures, since its removal leads to strongly degraded performance and the largest increase in FGD of all modifications. Similarly, the removal of the β-network and the γ-network leads to a deterioration of the performance. Looking at the visual results of the β-network ablation, the gestures still show a natural fluid movement but are mainly concentrated in front of the chest and do not show any obvious finger movement. On the other hand, the visual results from the γ-network ablation show fast, erratic movements of the hands and upper body, with some unnatural poses.
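The FGD reported in Table 2 is the Fréchet distance between Gaussians fitted to two sets of (feature or raw) gesture vectors [55]. A NumPy sketch follows; production code typically uses scipy.linalg.sqrtm for the matrix square root, and the eigendecomposition shortcut here is an illustrative simplification.

```python
import numpy as np

def frechet_distance(x, y):
    """Frechet distance between Gaussians fitted to two sample sets:
    |mu_x - mu_y|^2 + Tr(Sx + Sy - 2 (Sx Sy)^(1/2))."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    sx = np.cov(x, rowvar=False)
    sy = np.cov(y, rowvar=False)
    # matrix square root of the covariance product via eigendecomposition
    eigvals, eigvecs = np.linalg.eig(sx @ sy)
    sqrt_prod = (eigvecs * np.sqrt(np.abs(eigvals))) @ np.linalg.inv(eigvecs)
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(sx + sy - 2.0 * sqrt_prod.real))

rng = np.random.default_rng(5)
a = rng.standard_normal((500, 4))        # stand-in feature vectors
assert abs(frechet_distance(a, a)) < 1e-6  # identical sets score ~0
```

Lower values indicate that the distribution of generated gestures is closer to that of the natural motion, which is why "natural motion" scores 0.00 in Table 2.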
These results support our intended design choices, with the γ-network focusing mainly on smoothing the temporal information of the generated gestures, while the β-network refines the generated gestures to allow for more elaborate hand movements.

6 CONCLUSION

Our framework presents a novel approach to co-speech gesture generation inspired by robotic imitation learning and based on a behavior cloning architecture. We combine a transformer architecture with a generative adversarial network to create a model that ranks in the top half of the GENEA Challenge 2023 [31]. Although the model did not achieve results comparable to natural motion, we believe that additional training time and more sophisticated input segmentation could lead to improved results. An effective strategy may involve the use of only historical data in the FEIN model to ensure that the input data consists only of aligned gesture, audio, and text data. In addition, the use of a finer-grained control network that distinguishes separate body parts, such as hands and arms, could have the potential to improve the generated gestures. Increasing the feedback provided by the discriminator model in later stages of training is another way to improve performance, as the discriminator shows diminishing returns as training progresses. Additionally, selectively freezing certain models within our framework during later stages of training to focus on refining gestures could lead to performance improvements. Similarly, exploring alternative inference methods, such as predicting one frame at a time or adjusting the time window, may also help to improve the capabilities of the framework. In conclusion, we believe that our architecture demonstrates the potential to generate gestures that exhibit some human-like characteristics, and we believe that there are several ways in which our framework could be improved in the future.
Finally, we hypothesize that the integration of frameworks introduced in multimodal robot learning could further enhance the performance of future gesture generation models.
-N9mJYHPInI
Interesting adaptation of imitation learning approach
6: Marginally above acceptance threshold
This paper proposes a novel model based on the BC-Z imitation learning model with several architectural improvements and modifications for gesture generation. The proposed approach is sound. It is interesting that the authors are motivated by imitation learning, which has not yet been actively applied in the gesture generation field.

Comments and questions:
- While adopting an imitation learning approach is an interesting direction, the authors should elaborate some more to explain why imitation learning is beneficial in gesture generation. For example, what components of the proposed method contribute to "generate elaborate and complex outputs by acquiring knowledge from a relatively limited data set" (lines 188-190)?
- Some explanations of the proposed method need to be included.
  * What network architecture is used as the discriminator?
  * In Figure 1, what are the numbers in parentheses, e.g., (0:100), (100:200)?
  * Why did you use a higher learning rate for the discriminator?
  * Lines 574-576: "Among these tested frameworks, ... coherent gestures." Is it possible to provide some analysis (quantitative or qualitative) on this aspect? How about visualizing gestures, e.g., without WGAN, FEIN, etc.?
- Typos:
  * Figure 1: Word2Vec -> fastText
  * Line 389: Fort his -> For this
  * Line 396: y* -> \hat{y}?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
zrcgseqv0n2
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The DiffuseStyleGesture+ entry to the GENEA Challenge 2023
["Sicheng Yang", "Haiwei Xue", "Zhensong Zhang", "Minglei Li", "Zhiyong Wu", "Xiaofei Wu", "Songcen Xu", "Zonghong Dai"]
In this paper, we introduce the DiffuseStyleGesture+, our solution for the Generation and Evaluation of Non-verbal Behavior for Embodied Agents (GENEA) Challenge 2023, which aims to foster the development of realistic, automated systems for generating conversational gestures. Participants are provided with a pre-processed dataset and their systems are evaluated through crowdsourced scoring. Our proposed model, DiffuseStyleGesture+, leverages a diffusion model to generate gestures automatically. It incorporates a variety of modalities, including audio, text, speaker ID, and seed gestures. These diverse modalities are mapped to a hidden space and processed by a modified diffusion model to produce the corresponding gesture for a given speech input. Upon evaluation, the DiffuseStyleGesture+ demonstrated performance on par with the top-tier models in the challenge, showing no significant differences with those models in human-likeness, appropriateness for the interlocutor, and achieving competitive performance with the best model on appropriateness for agent speech. This indicates that our model is competitive and effective in generating realistic and appropriate gestures for given speech. The code, pre-trained models, and demos are available at https://github.com/YoungSeng/DiffuseStyleGesture/tree/DiffuseStyleGesturePlus/BEAT-TWH-main.
["gesture generation", "diffusion-based model", "conversation gesture"]
ABSTRACT

In this paper, we introduce the DiffuseStyleGesture+, our solution for the Generation and Evaluation of Non-verbal Behavior for Embodied Agents (GENEA) Challenge 2023, which aims to foster the development of realistic, automated systems for generating conversational gestures. Participants are provided with a pre-processed dataset and their systems are evaluated through crowdsourced scoring. Our proposed model, DiffuseStyleGesture+, leverages a diffusion model to generate gestures automatically. It incorporates a variety of modalities, including audio, text, speaker ID, and seed gestures. These diverse modalities are mapped to a hidden space and processed by a modified diffusion model to produce the corresponding gesture for a given speech input. Upon evaluation, the DiffuseStyleGesture+ demonstrated performance on par with the top-tier models in the challenge, showing no significant differences with those models in human-likeness, appropriateness for the interlocutor, and achieving competitive performance with the best model on appropriateness for agent speech. This indicates that our model is competitive and effective in generating realistic and appropriate gestures for given speech. The code, pre-trained models, and demos are available at this URL.

CCS CONCEPTS
• Human-centered computing → Human computer interaction (HCI); • Computing methodologies → Motion processing; Neural networks.

∗Both authors contributed equally to this research.
†Corresponding author

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted.
To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

ICMI '23, October 09-13, 2023, Paris, France
© 2023 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00
https://doi.org/XXXXXXX.XXXXXXX

KEYWORDS
gesture generation, diffusion-based model, conversation gesture

ACM Reference Format:
Sicheng Yang, Haiwei Xue, Zhiyong Wu, Minglei Li, Zonghong Dai, Zhensong Zhang, Songcen Xu, and Xiaofei Wu. 2023. The DiffuseStyleGesture+ entry to the GENEA Challenge 2023. In Proceedings of ACM International Conference on Multimodal Interaction (ICMI '23). ACM, New York, NY, USA, 7 pages. https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION

Non-verbal behaviors, particularly gestures, play a crucial role in our communication [24]. They provide the necessary spark to animate robotic interfaces, encapsulate diverse functional information, and subtly deliver social cues. We can create more engaging, informative, and socially adept robotic systems by incorporating these behaviors. And gestures enrich communication with non-verbal nuances [24, 39]. Indeed, natural conversations often incorporate body gestures, which can lead to perceptions of dullness or unnaturalness if absent. Individuals use gestures to express ideas and feelings, either directly or indirectly. For instance, the formation of a circle using the thumb and forefinger—an open palm gesture—communicates the concept of "OK" [32].

3D gesture generation has drawn much attention in the community. Early studies leveraged unimodal inputs: Dai et al. [10] employ audio features to drive gesture synthesis via Bi-LSTMs, and some works incorporate GANs and VAEs to learn relevant pairs and improve synthesis quality [19, 26, 34]. However, these methods encountered challenges such as gesture diversity and training difficulties. On the other hand, some works also explored the textual modality, Chiu et al.
[6] introducing the DCNF model combining speech, textual content, and prosody, and Yoon et al. [38] proposing an Encoder-Decoder framework. Liang et al. [20] introduce SEmantic Energized Generation (SEEG), a novel approach that excels at semantic-aware gesture generation. Recently, multimodal methods [1, 9, 35, 37] integrating both audio and text have gained attention, focusing on semantic feature encoding and long-sequence modeling of 3D human motion. Further, many works have begun to pay attention to the speaker's identity [21, 22], style [8, 33], emotion [25, 36], etc. Despite significant advances, gesture generation using a comprehensive multimodal approach remains challenging, mainly due to the inherent trade-off between quality and diversity [33].

Recently, diffusion models [11] have shown great potential for generating motions [7, 29, 41], achieving high-quality outputs while maintaining diversity. Hence, in this gesture generation challenge, we attempt to apply diffusion models to tackle the problem of multimodal gesture generation.

Inspired by [33], we find that the diffusion model-based approach for co-speech gesture generation surpasses other deep generative models of motion in terms of quality and alignment with speech, while allowing for the generation of stylized and diverse gestures. In this paper, we incorporate the textual modality into the DiffuseStyleGesture framework and restructure the architecture. Furthermore, we also refine the representations of gesture and audio in alignment with the challenge dataset. These enhancements allow the model to generate high-quality, speech-aligned, speaker-specific stylized, and diverse gestures with significant controllability.
We submitted our system to the GENEA Challenge 2023 [16], which aims to consolidate and compare various methods for co-speech gesture generation and evaluation, promoting the development of non-verbal behavior generation and its evaluation via a large-scale user study involving a common dataset and virtual agent.

The main contributions of our paper are: (1) We propose DiffuseStyleGesture+, a multimodal-driven gesture generation model with an improved input network structure, input modalities, and feature representations, as well as a diffusion model with cross-local attention. (2) The GENEA Challenge evaluation demonstrates that our model is among the first tier in human-likeness and appropriateness for the interlocutor, and achieves competitive performance on appropriateness for agent speech. (3) The ablation study validates the effectiveness of our proposed denoising module. Besides, we discuss the stylization and diversity of the generated gestures, along with further technical details.

2 METHOD
Our method is based on DiffuseStyleGesture [33], a recent diffusion model-based speech-driven gesture generation approach. Besides seed gesture, audio, and speaker ID, we also take text as an additional input modality. An overview of this work is shown in Figure 1.

2.1 Feature Extraction
We extract the features of the input modalities as follows:
• Gesture: We use 62 joints including the fingers, and each frame represents the motion features in terms of position, velocity, acceleration, rotation matrix, rotational angular velocity, and rotational angular acceleration of each joint. Although there are certain relations between positions, velocities, accelerations, etc., which can be transformed into each other, representing motion features with more motion data can lead to better performance [8, 40]. We denote the natural mocap gesture clip as x0 ∈ R^{(Nseed+N)×[62×(9+3)×3]}. The first Nseed frames of the gesture clip x0 are used as the seed
gesture, and the remaining N frames are what the model needs to predict based on text and audio.

Figure 1: (Top) Denoising module. A noising step td and a noisy gesture sequence xtd at this noising step, conditioned on c (including seed gesture, audio, speaker ID, and text), are fed into the model. (Bottom) Sample module. At each noising step td, we predict ˆx0 with the denoising process, then add noise back to obtain xtd−1 with the diffuse process. This process is repeated from td = Td until td = 0.

• Audio: More speech features also lead to better performance [4, 15]. Different representations can complement each other; e.g., representations such as pitch contain rhythmic content, pre-trained model features such as WavLM [5] contain more complex information such as emotion, Onsets contain beat information, etc. We combine MFCC, Mel Spectrum, Pitch, Energy [39], WavLM [5], and Onsets [2] as audio features. We denote the features of an audio clip as A ∈ R^{N×(40+64+2+2+1024+1)}.
• Speaker ID: The ID of the speaker is represented as a one-hot vector where only the element of the selected ID is nonzero. The Talking With Hands dataset has a total of 17 speakers, so the dimension of the speaker ID is 17.
• Text: Following [39], we use FastText [3] to obtain 300-D word embeddings. We use one bit to indicate whether there is a laugh or not, and the last bit is set to 0 as in [4]. Each word is mapped to its pre-trained word embedding at word-level granularity.
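To make the feature dimensions above concrete, here is a small NumPy sketch that assembles dummy (zero-valued) per-frame features and checks the stated sizes. The clip lengths follow the experiment setting (Nseed = 30, N = 120), and the (9+3)×3 split per joint is our reading of the formula (9-D rotation-matrix and 3-D position channels, each with value, velocity, and acceleration), not code from the actual system:

```python
import numpy as np

N_seed, N = 30, 120  # seed and predicted frames (5 s at 30 fps in total)

# Gesture: 62 joints; per joint, a 9-D rotation matrix and a 3-D position,
# each with value, velocity, and acceleration channels: (9 + 3) * 3 = 36.
gesture_dim = 62 * (9 + 3) * 3
x0 = np.zeros((N_seed + N, gesture_dim))

# Audio: MFCC(40) + Mel spectrum(64) + Pitch(2) + Energy(2) + WavLM(1024) + Onsets(1).
A = np.concatenate([np.zeros((N, d)) for d in (40, 64, 2, 2, 1024, 1)], axis=1)

# Speaker ID: one-hot over the 17 Talking With Hands speakers.
S = np.eye(17)[3]  # e.g. the speaker with index 3

# Text: 300-D FastText word embedding + 1 laughter bit + 1 zero bit, per frame.
T = np.zeros((N, 300 + 1 + 1))

print(x0.shape, A.shape, S.shape, T.shape)  # (150, 2232) (120, 1133) (17,) (120, 302)
```

The check confirms the dimensions used in the text: 62 × 36 = 2232 for gestures, 40 + 64 + 2 + 2 + 1024 + 1 = 1133 for audio, and 300 + 2 = 302 for text.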
Then the features of the text clip are T ∈ R^{N×302}.

2.2 Gesture Denoising
Unlike text-semantics-driven motion generation [13, 29, 41], which only needs a single token to carry the semantics of a sentence and does not have to be aligned with time, gesture generation is temporally perceptible; that is, the gestures are related to the rhythm of the speech. So we perform linear interpolation of the extracted audio features A in the temporal dimension in order to align them with the gestures. Gesture generation also differs from music-driven dance generation [28, 30, 42]: gestures are temporally related to semantics as well; for example, the hand opens when saying 'big'. As in [4, 37], we use frame-level aligned word vectors T.

Our goal is to synthesize a high-quality and speech-matched human gesture ˆx of length N given conditions c using the diffusion model [11]. Following [29], we predict the signal itself instead of predicting the noise at each noising step td. As shown at the top of Figure 1, the Denoising module reconstructs the original gesture x0 from the pure noise xtd, the noising step td, and the conditions c:

ˆx0 = Denoise(xtd, td, c)    (1)

where c = [S, D, A, T]. During training, the noising step td is sampled from a uniform distribution over {1, 2, ..., Td}, with position encoding [31]. xtd is the noisy gesture with the same dimension as the real gesture x0, obtained by sampling from the standard normal distribution N(0, I).

We add the information of the noising step td and the speaker ID S to form Z, then replicate and stack it into a sequence feature of length Nseed + N. The overall attention mechanism is similar to [33], using cross-local attention [27], self-attention [31], and relative position encoding (RPE) [14].
The difference is that we condition on D in the first Nseed frames and on A and T in the last N frames, so that the smooth transition between segments is considered in the first Nseed frames and the corresponding gestures are generated in the last N frames based on audio and text, which reduces the redundancy of the inputs.

The Denoising module is then trained by optimizing the Huber loss [12] between the generated gestures ˆx0 and the real human gestures x0:

L = E_{x0∼q(x0|c), td∼[1,Td]} [HuberLoss(x0 − ˆx0)]    (2)

2.3 Gesture Sampling
As shown at the bottom of Figure 1, when sampling, the initial noisy gesture xTd is sampled from the standard normal distribution, and the other xtd, td < Td, are the results of the previous noising step. The final gesture is given by splicing a number of clips of length N. The seed gesture for the first clip is a gesture from the dataset; the seed gesture for each subsequent clip is the last Nseed frames of the gesture generated for the previous clip. For every clip, at every noising step td, we predict the clean gesture ˆx0 using Equation (1) and add Gaussian noise to obtain xtd−1 with the diffuse process [11]. This process is repeated from td = Td until x0 is reached.

3 EXPERIMENT
3.1 Experiment Setting
We trained on all the data in the GENEA Challenge 2023 [16] training dataset, which is based on Talking With Hands [18]. In this work, gesture data are cropped to a length of 150 frames (5 seconds, 30 fps), with the first Nseed = 30 frames as the seed gesture and the last N = 120 frames used to calculate the loss between generated and real gestures in Equation (2). We apply standard normalization (zero mean and unit variance) to all joint feature dimensions.

Figure 2: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval).
Box edges are at the 25 and 75 percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.

The latent dimension of the attention-based encoder is 512. The cross-local attention networks use 8 heads and 48 attention channels, with a window size of 15 frames (0.5 seconds), each window attending to the one in front of it, and a dropout of 0.1. The self-attention networks are composed of 8 layers with 8 heads and a dropout of 0.1. The AdamW [23] optimizer (learning rate 3×10−5) is used with a batch size of 200 for 1,200,000 samples. Our models have been trained with Td = 1000 noising steps and a cosine noise schedule. The whole framework can be learned in about 132 hours on one NVIDIA V100 GPU.

3.2 Evaluation Setting
The challenge organizers conducted a detailed evaluation comparing all submitted systems [16]. Three dimensions were evaluated: human-likeness, appropriateness for agent speech, and appropriateness for the interlocutor. We strongly recommend the reference [16] for more details on the evaluation. The following abbreviations are used to denote each model in the evaluation:
• NA: Natural mocap ('NA' for 'natural').
• BM: The official monadic baseline [4], a model based on Tacotron 2 that takes information (WAV audio, TSV transcriptions, and speaker ID) from the main agent as input ('B' for 'baseline', 'M' for 'monadic').
• BD: The official dyadic baseline [4], which also takes information from the interlocutor in the conversation into account when generating gestures ('D' for 'dyadic').
• SA–SL: 12 submissions (ours is SF) to the final evaluation ('S' for a submission).

3.3 Evaluation Analysis
3.3.1 Human-likeness. For human-likeness, participants were asked "Please indicate on a sliding scale how human-like the gesture motion appears".
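Before turning to the results, the sampling procedure of Section 2.3, combined with the Td = 1000 cosine noise schedule from the training setup above, can be sketched as a standard DDPM x̂0-prediction loop. This is an illustration only: `toy_denoise` is a hypothetical stand-in for the trained denoising network, and the posterior formulas are the textbook DDPM ones [11], not code from the actual system:

```python
import numpy as np

T_d = 1000  # noising steps, as in the training setup

def cosine_alpha_bar(t, s=0.008):
    """Cosine noise schedule: cumulative alpha-bar, ~1 at t=0 and 0 at t=T_d."""
    return np.cos((t / T_d + s) / (1 + s) * np.pi / 2) ** 2

alpha_bar = cosine_alpha_bar(np.arange(T_d + 1))

def toy_denoise(x_t, t, cond):
    """Hypothetical stand-in for the trained network that predicts x0-hat."""
    return 0.5 * cond + 0.0 * x_t

def sample(cond, shape, rng):
    """Predict x0-hat at each step, then re-noise to step t-1 (Section 2.3)."""
    x_t = rng.standard_normal(shape)          # start from pure noise
    for t in range(T_d, 0, -1):
        x0_hat = toy_denoise(x_t, t, cond)
        if t > 1:
            abar_t, abar_prev = alpha_bar[t], alpha_bar[t - 1]
            beta_t = 1.0 - abar_t / abar_prev
            # Posterior mean/variance of q(x_{t-1} | x_t, x0_hat) in DDPM.
            mean = (np.sqrt(abar_prev) * beta_t / (1.0 - abar_t)) * x0_hat \
                 + (np.sqrt(abar_t / abar_prev) * (1.0 - abar_prev) / (1.0 - abar_t)) * x_t
            var = (1.0 - abar_prev) / (1.0 - abar_t) * beta_t
            x_t = mean + np.sqrt(var) * rng.standard_normal(shape)
        else:
            x_t = x0_hat  # final step returns the clean prediction
    return x_t
```

Because alpha-bar decreases monotonically from about 1 at td = 0 to 0 at td = Td, sampling starts from pure noise and the final step returns the clean prediction x̂0.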
The rating scale from 100 (best) to 0 (worst) is anchored by partitioning the sliders into five equal-length intervals labeled "Excellent", "Good", "Fair", "Poor", and "Bad". Box plots of the ratings are shown in Figure 2. The median of our system (SF) was 65 ∈ [64, 67] and the mean was 63.6 ± 1.3, and the human-likeness was not significantly different from that of system SG [16]. This result shows that our model can generate very high-quality gestures, though somewhat lower than natural mocap, which has a median of 71 ∈ [70, 71] and a mean of 68.4 ± 1.0.

Figure 3: Significant differences between conditions in the two appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. White means the condition listed on the y-axis achieved a mean appropriateness score significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

3.3.2 Appropriateness for agent speech. In terms of appropriateness for agent speech, participants were asked "Which character's motion matches the speech better, both in terms of rhythm and intonation and in terms of meaning?" Five response options were available: "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better".

Table 1: Ablation study results. '+' indicates additional modules and ↔ indicates the length of the modality in the time dimension.
Bold indicates the best metric.

Name | FGD on feature space ↓ | FGD on raw data space ↓
Ours | 14.461 | 531.172
+ Seed gesture ↔ N + Speech ↔ Nseed (DiffuseStyleGesture [33]) | 19.017 | 767.503
+ Seed gesture ↔ (N + Nseed) | 15.539 | 616.437

The mean appropriateness scores (MAS) of the submitted systems are close to each other, so we report significant differences, as shown in Figure 3(a). Our system (SF), with a MAS of 0.20 ± 0.06 and a Pref. matched (which identifies how often test-takers preferred matched motion in terms of appropriateness) of 55.8%, is significantly better than the submitted systems SH, SL, and SC. However, it falls significantly short of natural mocap (NA), with a MAS of 0.81 ± 0.06 and a Pref. matched of 73.6%, and of SG.

3.3.3 Appropriateness for the interlocutor. Additionally, an interlocutor who converses with the main agent is added to this user interface for scoring. Please refer to [16] for more details. For appropriateness for the interlocutor, participants were asked "In which of the two videos is the Main Agent's motion better suited for the interaction?". The response options were the same as before, i.e., "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better". We also report significant differences, as shown in Figure 3(b). Natural mocap (NA), with a MAS of 0.63 ± 0.08 and a Pref. matched of 69.8%, is significantly more appropriate for the interlocutor than all other conditions. Our system (SF), with a MAS of 0.04 ± 0.06 and a Pref. matched of 51.5%, is significantly more appropriate than conditions SG and SH, and not significantly different from the other conditions. Our system does not use interlocutor information and (as expected) is not significantly different from chance.

3.4 Ablation Studies
Moreover, we conduct ablation studies to assess the performance effects of different architectures in our model.
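Read as relative improvements (lower FGD is better), the Table 1 numbers work out as follows; a small plain-Python computation with the values copied from the table:

```python
# FGD values from Table 1 (lower is better).
ours = {"feature": 14.461, "raw": 531.172}
dsg_input = {"feature": 19.017, "raw": 767.503}   # + Seed gesture<->N + Speech<->Nseed
full_seed = {"feature": 15.539, "raw": 616.437}   # + Seed gesture<->(N+Nseed)

for space in ("feature", "raw"):
    for name, other in (("DiffuseStyleGesture-style inputs", dsg_input),
                        ("full-length seed gesture", full_seed)):
        gain = (other[space] - ours[space]) / other[space] * 100
        print(f"{space} space vs {name}: {gain:.1f}% lower FGD")
```

So the proposed input scheme lowers FGD by roughly 24% (feature space) and 31% (raw space) relative to the DiffuseStyleGesture-style inputs, and by roughly 7% and 14% relative to the full-length seed-gesture variant.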
We use Fréchet gesture distance (FGD) [37] as the objective evaluation metric, which is currently the closest to human perception among all objective evaluation metrics [17]; the lower the FGD, the better. The FGD is computed using the autoencoder provided by the challenge organizers. Our ablation studies, summarized in Table 1, indicate that when the input of [33] is used (the information of seed gestures and speech is given directly over the full length of a training sample), both metrics perform worse; when additional seed gestures are given over the full length of a training sample in our model, both metrics also become worse. The purpose of using seed gestures [33, 37] is to smooth the transition between generated segments, so they should not contain speech information and should only be considered at the beginning, for consistency with the previously generated gestures. We also learn that although the diffusion model has the ability to extract useful information from redundant representations, careful design of the network structure of the denoising module can further improve performance.

3.5 Discussion
3.5.1 Takeaways. Our co-speech gesture generation model (SF), based on the diffusion model, exhibits levels of human-likeness and appropriateness for the interlocutor comparable to the best-performing models (SG, SA). Furthermore, it achieves competitive performance with the leading model (SG) in terms of appropriateness for agent speech. These findings suggest that our proposed model performs at a top-tier level. Our model achieves good results due to the ability of the diffusion model to generate high-quality gestures and the local attention-based structure to generate gestures that correspond to the current short duration of speech.
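The FGD used in the ablation above is the Fréchet distance between Gaussians fitted to feature embeddings of real and generated gestures. A minimal NumPy sketch of the metric, with random vectors standing in for the challenge autoencoder's embeddings (`_sqrtm_psd` is a helper we introduce for the matrix square root):

```python
import numpy as np

def _sqrtm_psd(mat):
    """Symmetric PSD matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_*: arrays of shape (num_samples, feat_dim), e.g. autoencoder
    embeddings of real and generated gesture clips.
    """
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr(sqrt(cov_a cov_b)) computed through an equivalent symmetric PSD form.
    sqrt_a = _sqrtm_psd(cov_a)
    covmean = _sqrtm_psd(sqrt_a @ cov_b @ sqrt_a)
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2 * covmean))
```

Identical feature sets give a distance of (numerically) zero, and the distance grows with the gap between the two distributions, which is what makes the Table 1 comparisons meaningful.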
Notably, based on the diffusion model, the system can easily generate diverse gestures, since the main part of the input is noise and any seed gesture can be set. Moreover, based on the structure of the diffusion model, we add random masks to the denoising module, which enables interpolation and extrapolation of conditions such as speaker identity (style) and a high degree of control over the style intensity of the generated gestures. However, stylization and diversity were not included among the evaluation dimensions in the challenge.

3.5.2 Limitation. Our model does not consider the information of the interlocutor, so its appropriateness for the interlocutor is not significantly different from random selection. Taking information about the interlocutor into account is important in interaction, and this is a direction for future research. Moreover, pre-processing the data should improve the results: we do nothing special with motions that do not include hand movement and still train with their hands, which can lead to poorer hand results. For an exploration of the dataset and more discussion, please refer to the Appendix.

3.5.3 More Discussion. We also tried adding the BEAT [21] dataset (all of it / some of the speakers) to train together with Talking With Hands, but we got worse results: the model did not converge. We suspect the possible reason is that the BEAT dataset is very large, and the diffusion model needs more time to be trained well.

Although we did not consider interlocutors, in terms of appropriateness for the interlocutor our system (SF) is significantly more appropriate than SG and SH, and not significantly different from the other conditions. It is worth noting that SG is the best-performing model on the first two dimensions of the evaluation. We suspect that the reason for this is related to the setting of the evaluation, because "segments should be more or less complete phrases" in the evaluation.
However, the evaluation during silence is equally important, and the model should learn behavior from the data for when the agent is not talking, such as idling and other small gestures, with no unexpected actions. Although we did not consider the information of interlocutors, it is impressive that our model is able to remain idle while the other person is talking.

The diffusion model takes a long time to train and to run inference. The evaluation was performed using 8-10 seconds of speech, and longer speech evaluations may be more consistent with human perception. When the number of participants in the speech appropriateness evaluation was 448, there was no difference between our system (SF) and SG; when the number of participants was increased to 600, SG was significantly better than all of the submitted systems. This suggests the differences between the two systems were relatively small and did not become statistically significant (after FDR correction) until a large number of subjects had been recruited and evaluated.

Figure 4: Case study of generated gestures: (a) a gesture indicating largeness; (b) a pointing gesture; (c) a thinking gesture. The right side of each figure shows the generated gestures.

3.5.4 Case Study. Our diffusion-based method can extract semantic information and generate human-like gestures. For instance, when the speaker says "large", our system generates a gesture indicating largeness. When the speaker asks "Where do you stay?", our system generates a pointing gesture, mimicking human behavior. Our diffusion-based models can also generate incidental actions for laughter and surprise. For example, when the speaker laughs, the model generates a body shake, mimicking human laughter. When the speaker is thinking, the model generates a corresponding thinking action.
This suggests that diffusion-based models can learn semantics and synthesize semantic actions in specific situations.

4 CONCLUSION
In this paper, we propose DiffuseStyleGesture+, a diffusion model-based method for speech-driven co-speech gesture generation. Building on the DiffuseStyleGesture framework, we add the text modality, design the input architecture of the modalities more logically, and tune the representations of gesture and audio to the challenge dataset, so as to generate high-quality, speech-matched, speaker-specific stylized, and diverse gestures that are highly controllable under these conditions. The proposed model is in the first tier in human-likeness and appropriateness for the interlocutor, with no significant difference from the best model, and achieves competitive performance with the best model on appropriateness for agent speech, showing the effectiveness of the proposed method. However, compared with natural mocap, there is still much room for improvement worth further exploration.

ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (62076144), the Shenzhen Science and Technology Program (WDZC20200818121348001), and the Shenzhen Key Laboratory of next generation interactive media innovative technology (ZDSYS2021062-3092001004).
DAnJjo2VB3
Review of the DiffuseStyleGesture+ entry to the GENEA Challenge 2023
8: Top 50% of accepted papers, clear accept
The modified version of the DiffuseStyleGesture framework integrates textual input and additional speech and gesture representations. The evaluation results indicate that this model is able to generate high-quality gesture motions compared with the ground-truth motions and the other submitted systems. Comments and questions: Speech features (lines 172-173): Can you explain the benefit of incorporating the supplementary speech features into the network? It would be helpful to compare the Fréchet gesture distance (FGD) results when including and excluding each speech representation. Text features (lines 181-184): There are various actions such as laughter, silence, surprise, and so on. Could you explain more about the motivation/benefit of including only laugh information in the text features? How do you get laugh information from the text input? Discussion (lines 88-94): Both speech and textual input are used in DiffuseStyleGesture. However, there is no mention of the advantages of using the textual modality. It would be interesting to include an ablation test to discuss the impact of the textual modality on the results. Reproducibility (lines 24-26): The authors will release the code and pretrained models after the acceptance of the paper.
3: The reviewer is fairly confident that the evaluation is correct
zrcgseqv0n2
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The DiffuseStyleGesture+ entry to the GENEA Challenge 2023
["Sicheng Yang", "Haiwei Xue", "Zhensong Zhang", "Minglei Li", "Zhiyong Wu", "Xiaofei Wu", "Songcen Xu", "Zonghong Dai"]
In this paper, we introduce the DiffuseStyleGesture+, our solution for the Generation and Evaluation of Non-verbal Behavior for Embodied Agents (GENEA) Challenge 2023, which aims to foster the development of realistic, automated systems for generating conversational gestures. Participants are provided with a pre-processed dataset and their systems are evaluated through crowdsourced scoring. Our proposed model, DiffuseStyleGesture+, leverages a diffusion model to generate gestures automatically. It incorporates a variety of modalities, including audio, text, speaker ID, and seed gestures. These diverse modalities are mapped to a hidden space and processed by a modified diffusion model to produce the corresponding gesture for a given speech input. Upon evaluation, the DiffuseStyleGesture+ demonstrated performance on par with the top-tier models in the challenge, showing no significant differences with those models in human-likeness, appropriateness for the interlocutor, and achieving competitive performance with the best model on appropriateness for agent speech. This indicates that our model is competitive and effective in generating realistic and appropriate gestures for given speech. The code, pre-trained models, and demos are available at https://github.com/YoungSeng/DiffuseStyleGesture/tree/DiffuseStyleGesturePlus/BEAT-TWH-main.
["gesture generation", "diffusion-based model", "conversation gesture"]
ABSTRACTIn this paper, we introduce the DiffuseStyleGesture+, our solutionfor the Generation and Evaluation of Non-verbal Behavior for Em-bodied Agents (GENEA) Challenge 2023, which aims to foster thedevelopment of realistic, automated systems for generating conver-sational gestures. Participants are provided with a pre-processeddataset and their systems are evaluated through crowdsourcedscoring. Our proposed model, DiffuseStyleGesture+, leverages adiffusion model to generate gestures automatically. It incorporatesa variety of modalities, including audio, text, speaker ID, and seedgestures. These diverse modalities are mapped to a hidden spaceand processed by a modified diffusion model to produce the corre-sponding gesture for a given speech input. Upon evaluation, theDiffuseStyleGesture+ demonstrated performance on par with thetop-tier models in the challenge, showing no significant differenceswith those models in human-likeness, appropriateness for the in-terlocutor, and achieving competitive performance with the bestmodel on appropriateness for agent speech. This indicates that ourmodel is competitive and effective in generating realistic and ap-propriate gestures for given speech. The code, pre-trained models,and demos are available at this URL.CCS CONCEPTS•Human-centered computing →Human computer interac-tion (HCI) ;•Computing methodologies →Motion processing ;Neural networks .∗Both authors contributed equally to this research.†Corresponding authorPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. 
To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from [email protected] ’23, October 09-13 , 2023, Paris, France©2023 Association for Computing Machinery.ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00https://doi.org/XXXXXXX.XXXXXXXKEYWORDSgesture generation, diffusion-based model, conversation gestureACM Reference Format:Sicheng Yang, Haiwei Xue, Zhiyong Wu, Minglei Li, Zonghong Dai, Zhen-song Zhang, Songcen Xu, and Xiaofei Wu. 2023. The DiffuseStyleGesture+entry to the GENEA Challenge 2023. In Proceedings of ACM InternationalConference on Multimodal Interaction (ICMI ’23). ACM, New York, NY, USA,7 pages. https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONNon-verbal behaviors, particularly gestures, act a crucial role in ourcommunication [ 24]. They provide the necessary spark to animaterobotic interfaces, encapsulate diverse functional information, andsubtly deliver social cues. We can create more engaging, informative,and socially adept robotic systems by incorporating these behaviors.And gestures enrich communication with non-verbal nuances [ 24,39]. Indeed, natural conversations often incorporate body gestures,which can lead to perceptions of dullness or unnaturalness if absent.Individuals use gestures to express ideas and feelings, either directlyor indirectly. For instance, the formation of a circle using the thumband forefinger—an open palm gesture—communicates the conceptof “OK” [32].3D gesture generation has drawn much attention in the com-munity. Early studies leveraged unimodal inputs, Dai et al. [ 10]employ audio features to drive gesture synthesis via Bi-LSTMs, andsome works incorporate GANs and VAEs to learn relevant pairsand improve synthesis quality [ 19,26,34]. However, these meth-ods encountered challenges such as gesture diversity and trainingdifficulties. On the other hand, some works also explored textualmodality, Chiu et al. 
[ 6] introducing the DCNF model combiningspeech, textual content, and prosody, and Yoon et al. [ 38] propos-ing an Encoder-Decoder framework. Liang et al. [ 20] introducesSEmantic Energized Generation (SEEG), a novel approach that ex-cels at semantic-aware gesture generation. Recently, multimodalmethods [1, 9, 35, 37] integrating both audio and text have gainedattention, focusing on the semantic feature encoding and long se-quence modeling of 3D human motion. Further, many works beginICMI ’23, October 09-13 , 2023, Paris, France Sicheng Yang, et al.to pay attention to the speaker’s identity [ 21,22], style [ 8,33], emo-tion [ 25,36], etc. Despite significant advances, gesture generationusing a comprehensive multimodal approach remains challenging,mainly due to the inherent trade-off between quality and diversity[33].Recently, diffusion models [ 11] have shown great potential forgenerating motions [ 7,29,41], achieving high-quality outputs whilemaintaining diversity. Hence, in this gesture generation challenge,we attempt to apply diffusion models to tackle the problem ofmultimodal gesture generation.Inspired by [ 33], we find that the diffusion model-based approachfor co-speech gesture generation surpasses other deep generativemodels of motion in terms of quality and alignment with speech,while allowing for the generation of stylized and diverse gestures.In this paper, we incorporate textual modality using the DiffuseS-tyleGesture framework and restructure the architecture. Further-more, we also refined the representations of gesture and audio, inalignment with the challenge dataset. These enhancements allowthe model to generate high-quality, speech-aligned, speaker-specificstylized, and diverse gestures with significant controllability. 
Wesubmitted our system to the GENEA challenge 2023 [ 16], whichaims to consolidate and compare various methods for co-speechgesture generation and evaluation, promoting the development ofnon-verbal behavior generation and its evaluation via a large-scaleuser study involving a common dataset and virtual agent.The main contributions of our paper are: (1) We propose Dif-fuseStyleGesture+, a multimodal-driven gesture generation modelwith improved input network structure, input modality and featurerepresentation, as well as the diffusion model with cross-local atten-tion. (2) The evaluation of the GENEA Challenge demonstrates thatour model is among the first tier at human-likeness, appropriate-ness for the interlocutor, and achieves competitive performance onappropriateness for agent speech. (3) The ablation study validatesthe effectiveness of our proposed denoising module. Besides, wediscuss the stylization and diversity of the generated gestures, aswell as further discussion of more technical details.2 METHODOur method is based on DiffuseStyleGesture [ 33], a recent diffusionmodel-based speech-driven gesture generation approach. Besidesseed gesture, audio and speaker ID, we also take text as an additionalinput modality. The overview of this work is shown in Figure 1.2.1 Feature ExtractionWe extract the features of the input modalities as follows:•Gesture: We used 62 joints including the fingers, and eachframe represents the motion features in terms of position,velocity, acceleration, rotation matrix, rotational angularvelocity, and rotational angular acceleration of each joint.Although there are certain relations between positions, veloc-ities, accelerations, etc., which can be transformed into eachother, representing motion features with more motion datacan lead to better performance [ 8,40]. We denote the naturalmocap gestures clip as x0∈R(Nseed+N)×[62×(9+3)×3]. ThefirstNseed frames of the gestures clip x0are used as the seed... 
Figure 1: (Top) Denoising module. A noising step t_d and a noisy gesture sequence x_t at this noising step, conditioned on c (including seed gesture, audio, speaker ID, and text), are fed into the model. (Bottom) Sample module. At each noising step t_d, we predict x̂_0 with the denoising process, then add noise back to noising step x_{t_d−1} with the diffuse process. This process is repeated from t_d = T_d until t_d = 0.

gesture, and the remaining N frames are what the model needs to predict based on text and audio.
• Audio: More speech features also lead to better performance [4, 15]. Different representations can complement each other; e.g., representations such as pitch contain rhythmic content, pre-trained model features such as WavLM [5] contain more complex information such as emotion, Onsets contain beat information, etc. We combine MFCC, Mel Spectrum, Pitch, Energy [39], WavLM [5], and Onsets [2] as audio features. We denote the features of an audio clip as A ∈ R^{N×(40+64+2+2+1024+1)}.
• Speaker ID: The ID of the speaker is represented as a one-hot vector where only the element of the selected ID is nonzero. The Talking With Hands dataset has a total of 17 speakers, so the dimension of the speaker ID is 17.
• Text: Following [39], we use FastText [3] to obtain the 300-D word embeddings. We use one bit to indicate whether there is a laugh or not, and the last bit is set to 0 as in [4]. Each word is mapped to its pre-trained word embedding at word-level granularity.
Then the features of the text clip are T ∈ R^{N×302}.

2.2 Gesture Denoising
Unlike text semantics-driven motion generation [13, 29, 41], which needs only a token to contain the semantics of a sentence and does not have to be aligned with time, gesture generation is temporally perceptible; that is, the gestures are related to the rhythm of the speech. So we perform linear interpolation of the extracted audio features A in the temporal dimension in order to align them with the gestures. Gestures also differ from music-driven dance generation [28, 30, 42]: gestures and semantics are temporally related as well; for example, the hand opens when saying 'big'. As in [4, 37], we use frame-level aligned word vectors T.

Our goal is to synthesize a high-quality and speech-matched human gesture x̂ of length N given conditions c using the diffusion model [11]. Following [29], we predict the signal itself instead of predicting the noise at each noising step t_d. As shown in the top of Figure 1, the Denoising module reconstructs the original gesture x_0 from the pure noise x_t, the noising step t_d, and the conditions c:

x̂_0 = Denoise(x_{t_d}, t_d, c)    (1)

where c = [S, D, A, T]. During training, the noising step t_d is sampled from a uniform distribution over {1, 2, ..., T_d}, with position encoding [31]. x_{t_d} is the noisy gesture with the same dimension as the real gesture x_0, obtained by sampling from the standard normal distribution N(0, I). We add the information of the noising step T_d and the speaker ID S to form Z, then replicate and stack them into a sequence feature of length N_seed + N. The overall attention mechanism is similar to [33], using cross-local attention [27], self-attention [31], and relative position encoding (RPE) [14].
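To make the denoising and sampling procedures concrete, below is a minimal NumPy sketch of the x̂_0-prediction objective of Equation (1) and the predict-then-re-diffuse sampling loop of Figure 1. It is not the authors' implementation: the real Denoise network is the attention-based architecture described above and is stubbed out here, the feature tensors are random stand-ins with the paper's stated dimensions, and T_d is reduced from 1000 to 50 so the sketch runs quickly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the paper: 30 seed frames, 120 predicted frames,
# 62 joints x (9+3)x3 motion features, 1133-D audio, 302-D text, 17 speakers.
N_SEED, N = 30, 120
D_MOTION = 62 * (9 + 3) * 3            # 2232
D_AUDIO = 40 + 64 + 2 + 2 + 1024 + 1   # MFCC+Mel+Pitch+Energy+WavLM+Onsets = 1133
D_TEXT = 302
N_SPEAKERS = 17
T_D = 50  # the paper uses T_d = 1000; reduced here so the sketch runs quickly

def cosine_alphas_bar(T, s=0.008):
    """Cumulative alpha-bar values for a cosine noise schedule."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return np.clip(f[1:] / f[0], 1e-5, 1.0)

ALPHAS_BAR = cosine_alphas_bar(T_D)

def diffuse(x0, t):
    """Forward process q(x_t | x_0) at noising step t (1-indexed)."""
    a = ALPHAS_BAR[t - 1]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * rng.standard_normal(x0.shape)

def huber(err, delta=1.0):
    a = np.abs(err)
    return np.where(a <= delta, 0.5 * err**2, delta * (a - 0.5 * delta)).mean()

def denoise(x_t, t, c):
    """Stand-in for the attention-based Denoise(x_t, t, c) network."""
    return np.zeros_like(x_t)

# One training step: noise the clip, predict x0_hat, take the Huber loss (Eq. 2).
x0 = rng.standard_normal((N_SEED + N, D_MOTION))
c = {
    "seed": x0[:N_SEED],                          # seed gesture (first N_seed frames)
    "audio": rng.standard_normal((N, D_AUDIO)),   # frame-aligned audio features A
    "text": rng.standard_normal((N, D_TEXT)),     # frame-aligned word vectors T
    "speaker": np.eye(N_SPEAKERS)[3],             # one-hot speaker ID
}
t = int(rng.integers(1, T_D + 1))                 # t_d ~ Uniform({1, ..., T_d})
x_t = diffuse(x0[N_SEED:], t)
loss = huber(x0[N_SEED:] - denoise(x_t, t, c))

# Sampling: start from pure noise, predict x0_hat, re-diffuse to step t_d - 1.
x_t = rng.standard_normal((N, D_MOTION))
for t in range(T_D, 0, -1):
    x0_hat = denoise(x_t, t, c)                   # Eq. (1)
    x_t = diffuse(x0_hat, t - 1) if t > 1 else x0_hat
```

Splicing long sequences then proceeds as in Section 2.3: the last N_seed frames of each generated clip become the seed gesture of the next clip.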
The difference is that we condition on D in the first N_seed frames and on A and T in the last N frames, so that the smooth transition between segments is considered in the first N_seed frames and the corresponding gestures are generated in the last N frames based on audio and text, which reduces the redundancy of the inputs. The Denoising module is then trained by optimizing the Huber loss [12] between the generated gestures x̂_0 and the real human gestures x_0:

L = E_{x_0∼q(x_0|c), t_d∼[1,T_d]} [HuberLoss(x_0 − x̂_0)]    (2)

2.3 Gesture Sampling
As shown in the bottom of Figure 1, when sampling, the initial noisy gesture x_T is sampled from the standard normal distribution, and the other x_{t_d}, t_d < T_d, are the results of the previous noising step. The final gesture is given by splicing a number of clips of length N. The seed gesture for the first clip is a gesture from the dataset; the seed gesture for each subsequent clip is the last N_seed frames of the gesture generated in the previous clip. For every clip, at every noising step t_d, we predict the clean gesture x̂_0 using Equation (1) and add Gaussian noise to obtain the noising step x_{t_d−1} with the diffuse process [11]. This process is repeated from t_d = T_d until x_0 is reached.

3 EXPERIMENT
3.1 Experiment Setting
We trained on all the data in the GENEA Challenge 2023 [16] training dataset, which is based on Talking With Hands [18]. In this work, gesture data are cropped to a length of 150 frames (5 seconds, 30 fps), with the first N_seed = 30 frames as seed gesture and the last N = 120 frames used to calculate the loss between generated gestures and real gestures in Equation (2). We apply standard normalization (zero mean and unit variance) to all joint feature dimensions. The latent dimension of the attention-based encoder is 512. The cross-local attention networks use 8 heads and 48 attention channels; the window size is 15 frames (0.5 seconds), each window looks at the one in front of it, and dropout is 0.1. The self-attention networks are composed of 8 layers with 8 heads and a dropout of 0.1. The AdamW [23] optimizer (learning rate 3 × 10^−5) is used with a batch size of 200 for 1,200,000 samples. Our models have been trained with T_d = 1000 noising steps and a cosine noise schedule. The whole framework can be learned in about 132 hours on one NVIDIA V100 GPU.

Figure 2: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at 25 and 75 percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.

3.2 Evaluation Setting
The challenge organizers conducted a detailed evaluation comparing all submitted systems [16]. Three aspects were evaluated: human-likeness, appropriateness for agent speech, and appropriateness for the interlocutor. We strongly recommend the reference [16] for more details on the evaluation. The following abbreviations are used to denote each model in the evaluation:
• NA: Natural mocap ('NA' for 'natural').
• BM: The official monadic baseline [4], a model based on Tacotron 2 that takes information (WAV audio, TSV transcriptions, and speaker ID) from the main agent as input ('B' for 'baseline', 'M' for 'monadic').
• BD: The official dyadic baseline [4], which also takes information from the interlocutor in the conversation into account when generating gestures ('D' for 'dyadic').
• SA–SL: 12 submissions (ours is SF) to the final evaluation ('S' for a submission).

3.3 Evaluation Analysis
3.3.1 Human-likeness. For human-likeness, participants were asked "Please indicate on a sliding scale how human-like the gesture motion appears".
The rating scale from 100 (best) to 0 (worst) is anchored by partitioning the sliders into five equal-length intervals labeled "Excellent", "Good", "Fair", "Poor", and "Bad". Box plots and significance comparisons are shown in Figure 2. The median of our system (SF) was 65 ∈ [64, 67] and the mean was 63.6 ± 1.3, and its human-likeness was not significantly different from that of system SG [16]. This result shows that our model can generate very high-quality gestures, though somewhat lower than natural mocap, which has a median of 71 ∈ [70, 71] and a mean of 68.4 ± 1.0.

Figure 3: Significant differences between conditions in the two appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. White means the condition listed on the y-axis achieved a mean appropriateness score significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

3.3.2 Appropriateness for agent speech. In terms of appropriateness for agent speech, participants were asked "Which character's motion matches the speech better, both in terms of rhythm and intonation and in terms of meaning?" Five response options were available: "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better".

Table 1: Ablation study results. '+' indicates additional modules and ↔ indicates the length of the modality in the time dimension. Bold indicates the best metric.

Name                                                            FGD on feature space ↓   FGD on raw data space ↓
Ours                                                            14.461                   531.172
+ Seed gesture ↔ N + Speech ↔ N_seed (DiffuseStyleGesture [33]) 19.017                   767.503
+ Seed gesture ↔ (N + N_seed)                                   15.539                   616.437

The mean appropriateness scores (MAS) of the submitted systems are close to each other, so we report significant differences as shown in Figure 3(a). Our system (SF) has a MAS of 0.20 ± 0.06 and a Pref. matched (which identifies how often test-takers preferred matched motion in terms of appropriateness) of 55.8%, which is significantly better than the submitted systems SH, SL, and SC. However, it falls significantly short of natural mocap (NA), which has a MAS of 0.81 ± 0.06 and a Pref. matched of 73.6%, and of SG.

3.3.3 Appropriateness for the interlocutor. Additionally, an interlocutor who converses with the main agent is added to this user interface for scoring. Please refer to [16] for more details. For appropriateness for the interlocutor, participants were asked "In which of the two videos is the Main Agent's motion better suited for the interaction?". The response options were the same as before, i.e., "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better". We again report significant differences, as shown in Figure 3(b). Natural mocap (NA), with a MAS of 0.63 ± 0.08 and a Pref. matched of 69.8%, is significantly more appropriate for the interlocutor compared to all other conditions. Our system (SF) has a MAS of 0.04 ± 0.06 and a Pref. matched of 51.5%, which is significantly more appropriate than conditions SG and SH and not significantly different from the other conditions. Our system does not use interlocutor information and (as expected) is not significantly different from chance.

3.4 Ablation Studies
Moreover, we conduct ablation studies to assess the performance effects of different architectures in our model.
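To make the conditioning layouts compared in Table 1 concrete, here is a small, hypothetical NumPy sketch of how the per-frame condition sequence could be assembled: our scheme places the seed gesture only in the first N_seed frames and audio/text only in the last N frames, whereas the ablation in the last row additionally spreads the seed gesture over the full window. The zero-padding convention and the tiling of the seed frames are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

N_SEED, N = 30, 120
D_G, D_A, D_T = 2232, 1133, 302   # gesture / audio / text feature sizes from the paper
L = N_SEED + N                    # full window length (150 frames)

def layout_ours(seed, audio, text):
    """Seed gesture in the first N_seed frames; audio and text in the last N frames."""
    cond = np.zeros((L, D_G + D_A + D_T))
    cond[:N_SEED, :D_G] = seed
    cond[N_SEED:, D_G:D_G + D_A] = audio
    cond[N_SEED:, D_G + D_A:] = text
    return cond

def layout_full_seed(seed, audio, text):
    """Ablation: the seed gesture is additionally tiled over all L frames."""
    cond = np.zeros((L, D_G + D_A + D_T))
    cond[:, :D_G] = np.resize(seed, (L, D_G))   # repeat the N_seed seed frames
    cond[N_SEED:, D_G:D_G + D_A] = audio
    cond[N_SEED:, D_G + D_A:] = text
    return cond

seed = np.ones((N_SEED, D_G))
audio = np.ones((N, D_A))
text = np.ones((N, D_T))
cond = layout_ours(seed, audio, text)
```

The point of the comparison is that the non-overlapping layout (`layout_ours`) removes redundant inputs: speech information never has to be reconciled with the seed frames, which Table 1 suggests helps both FGD metrics.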
We use Fréchet gesture distance (FGD) [37] as the objective evaluation metric, which is currently the closest to human perception among all objective evaluation metrics [17]. The lower the FGD, the better. The FGD is computed using the autoencoder provided by the challenge organizers. Our ablation studies, as summarized in Table 1, indicate that when the input of [33] is used (the information of seed gestures and speech is given directly over the full length of a training sample), both metrics perform worse; when additional seed gestures are given over the full length of a training sample in our model, both metrics also become worse. The purpose of using seed gestures [33, 37] is to smooth the transition between generated segments, so they should not contain speech information and should only be considered at the beginning, for consistency with the previously generated gestures. We also learn that although the diffusion model has the ability to learn useful information from redundant representations, careful design of the network structure of the denoising module can further improve performance.

3.5 Discussion
3.5.1 Takeaways. Our co-speech gesture generation model (SF), based on the diffusion model, exhibits comparable levels of human-likeness and appropriateness for the interlocutor when compared to the best-performing models (SG, SA). Furthermore, it achieves competitive performance with the leading model (SG) in terms of appropriateness for agent speech. These findings suggest that our proposed model performs at a top-tier level. Our model achieves good results due to the ability of the diffusion model to generate high-quality gestures and of the local attention-based structure to generate gestures that correspond to the current short duration of speech.
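Before moving on, the FGD used in the ablations above is the Fréchet distance between Gaussians fitted to feature embeddings of real and generated motion. A generic NumPy sketch follows; the random features here are stand-ins for the challenge autoencoder's embeddings, and the metric itself is agnostic to the feature extractor.

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature matrices
    (rows = samples, columns = embedding dimensions)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    # Tr((cov_a cov_b)^(1/2)) via eigenvalues; clip tiny negatives from round-off.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.standard_normal((500, 8))               # stand-in for embedded real motion
close = real + 0.01 * rng.standard_normal((500, 8))  # near-identical distribution
far = real + 5.0                                   # clearly mismatched distribution

fgd_close = frechet_distance(real, close)
fgd_far = frechet_distance(real, far)              # lower FGD = better, so this is worse
```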
Notably, the diffusion model can easily generate diverse gestures, since the main part of the input is noise and any seed gesture can be set. Moreover, building on the structure of the diffusion model, we add random masks to the denoising module, which enables the interpolation and extrapolation of conditions such as speaker identity (style) and gives a high degree of control over the style intensity of the generated gestures. However, stylization and diversity are not included among the evaluation dimensions in the challenge.

3.5.2 Limitation. Our model does not consider information about the interlocutor and, accordingly, is not significantly different from a random selection on that dimension. Taking the interlocutor into account is important in interaction, and this is a direction for future research. Moreover, better pre-processing of the data should improve the results. We do nothing special with motions that do not include hand movement and still train with their hands, which can lead to poorer hand results. For an exploration of the dataset and more discussion, please refer to the Appendix.

3.5.3 More Discussion. We also tried to add the BEAT [21] dataset (all of it / some of the speakers) to train together with Talking With Hands, but we got worse results: the model did not converge. We suspect the possible reason is that the BEAT dataset is very large, and the diffusion model needs more time to be trained well. Although we did not consider interlocutors, in terms of appropriateness for the interlocutor our system (SF) is significantly more appropriate than SG and SH, and not significantly different from the other conditions. It is worth noting that SG is the best-performing model on the first two dimensions of the evaluation. We suspect that the reason for this is related to the setting of the evaluation, because "segments should be more or less complete phrases" in the evaluation.
However, evaluation during silence is equally important, and the model should learn from the data the behavior for when the agent is not talking, such as idling and other small gestures, without other unexpected actions. Although we did not consider interlocutor information, it is notable that our model is able to remain idle while the other person is talking (i.e., when the main agent is not talking). The diffusion model takes a long time to train and to run inference. The evaluation was performed using 8-10 seconds of speech, and evaluation on longer speech may be more consistent with human perception. When the number of participants in the speech appropriateness evaluation was 448, there was no statistically significant difference between our system (SF) and SG; when the number of participants was increased to 600, SG was significantly better than all of the other submitted systems. This suggests that the difference between the two systems was relatively small and only became statistically significant, after FDR correction, once a large number of subjects had been recruited.

Figure 4: Case study of generated gestures: (a) a gesture indicating largeness; (b) a pointing gesture; (c) a thinking gesture. The right side of each figure shows the generated gestures.

3.5.4 Case Study. Our diffusion-based method can extract semantic information and generate human-like gestures. For instance, when the speaker says "large", our system generates a gesture indicating largeness. When the speaker asks "Where do you stay?", our system generates a pointing gesture, mimicking human behavior. Our diffusion-based models can also generate incidental actions for laughter and surprise. For example, when the speaker laughs, the model generates a body shake, mimicking human laughter. When the speaker is thinking, the model generates a corresponding thinking action.
This suggests that diffusion-based models can learn semantics and synthesize semantic actions in specific situations.

4 CONCLUSION
In this paper, we propose DiffuseStyleGesture+, a diffusion model-based method for speech-driven co-speech gesture generation. Based on the DiffuseStyleGesture framework, we add the text modality, design the input architecture of the modalities more logically, and tune the representations of gesture and audio according to the challenge dataset, so as to generate high-quality, speech-matched, speaker-specific stylized, and diverse gestures that are highly controllable on these conditions. The proposed model is in the first tier in human-likeness and appropriateness for the interlocutor, with no significant difference from the best model (although, since our system does not use interlocutor information, its appropriateness for the interlocutor is also not significantly different from chance), and achieves competitive performance with the best model on appropriateness for agent speech, showing the effectiveness of the proposed method. However, compared with natural mocap, there is still much room for improvement worth further exploration.

ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (62076144), the Shenzhen Science and Technology Program (WDZC20200818121348001), and the Shenzhen Key Laboratory of next generation interactive media innovative technology (ZDSYS20210623092001004).
ywm4aONALi0
A good paper describing a well-performing monadic challenge submission
8: Top 50% of accepted papers, clear accept
Greatest strengths:
* The paper is well written and gives relevant details for replicating and learning from the work, as system-description papers should
* The work is of particular interest because the associated system demonstrated particularly competitive human-likeness in the evaluation
* The appendix providing observations about the dataset, some negative results, etc., adds value
* Releasing code and pre-trained models, as the abstract commits to, promises to be of great use to the community

Main weakness:
* The claims (e.g., abstract line 20 and discussion line 425) that there were no statistically significant differences to top-tier models in terms of appropriateness for the interlocutor are technically correct, but they can easily be read to imply top-tier interlocutor awareness, when in fact the system does not appear to be aware of interlocutor behaviour at all. To instead state that the degree of interlocutor awareness was not statistically different from chance would be equally correct and less likely to suggest an interlocutor-aware system. The authors should consider presenting the results regarding appropriateness for the interlocutor in a more nuanced manner.

-----

Below follow a few detailed comments on the submission, in order of appearance:

> Line 88: "we find that the diffusion model-based approach for co-speech gesture generation surpasses existing methods in terms of quality, style, diversity, and alignment with speech."

Two comments: 1) The proposed approach is unlikely to have surpassed the motion-graph based approach of GestureMaster in the previous GENEA Challenge, since that achieved better human-likeness ratings than the mocap itself. The statement starting on line 88 should be qualified to be more clear about which existing methods are being referred to, e.g., "surpasses other deep generative models of motion in terms of...".
2) "we find that" The results/findings reported in the current paper do not allow concluding that diffusion models are better than other deep generative approaches in terms of style and diversity. It is probably the truth, but is not a finding of the studies in the paper. For example, it is not clear which other models in the challenge evaluation were diffusion models.

> Line 97: "GENEA 2023 challenge"

The official way to write this is "GENEA Challenge 2023".

> Line 155: "We used 62 joints including the fingers"

How were BVH files without finger mocap handled? (Optionally, if the authors think their handling of missing finger mocap was notable for the results they achieved, they might choose to comment on that somewhere.)

> Line 242: "We train on the official training dataset of Talking With Hands [18] provided by GENEA 2023 [16]"

Two remarks: 1) The GENEA Challenge 2023 dataset is not the official training set of Talking With Hands. In fact, this reviewer believes that the Talking With Hands data actually is not partitioned into training and test sets at all. It would be better to write something like "We trained on the official training dataset of the GENEA Challenge 2023 [16], which is based on Talking With Hands [18]." 2) Since the Talking With Hands mocap data is not always perfect, it would be useful to write something like "We trained on all the data in the GENEA Challenge 2023 training dataset", to be unambiguously clear that no data was excluded due to poor mocap, missing fingers, or other reasons. (Conversely, if training data was excluded, the text should specify that, and how the selection was performed.)

> Line 267 (and elsewhere): "NA: Ground truth"

This reviewer recommends against using the term "ground truth" in gesture generation, since (unlike many classical machine learning problems) there is no single, "true" way to move, even to a given speech. "Natural mocap" is one alternative that can be used instead of writing "ground truth".
> Line 328: "3.3.3 Appropriateness for the interlocutor"

The reviewer materials also mention testing whether or not appropriateness is different from chance, by looking at whether or not the MAS confidence intervals overlap with zero. Given that all challenge submissions were close to chance performance in this evaluation, it might be more useful to consider whether or not one's system performed significantly differently from chance when discussing the results of this particular user study. It is the understanding of this reviewer that the presented system did not use interlocutor information, and (consistent with expectations) was not significantly different from chance. It would be good to mention this.

> Line 771: "When the number of participants in the speech appropriateness evaluation was 448, there was no difference between our system (SF) and SG"

This wording makes it sound like a difference did not exist until additional subjects were recruited. That is not quite right. The difference was always there between the two systems, and it was always the same size (the systems did not change when the number of test takers was expanded); it is just that the difference was relatively small, and therefore it did not become statistically significant in the evaluation after FDR correction until a large number of subjects were recruited. Tweaking the wording could make the statement in the paper more accurate.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
zrcgseqv0n2
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The DiffuseStyleGesture+ entry to the GENEA Challenge 2023
["Sicheng Yang", "Haiwei Xue", "Zhensong Zhang", "Minglei Li", "Zhiyong Wu", "Xiaofei Wu", "Songcen Xu", "Zonghong Dai"]
In this paper, we introduce the DiffuseStyleGesture+, our solution for the Generation and Evaluation of Non-verbal Behavior for Embodied Agents (GENEA) Challenge 2023, which aims to foster the development of realistic, automated systems for generating conversational gestures. Participants are provided with a pre-processed dataset and their systems are evaluated through crowdsourced scoring. Our proposed model, DiffuseStyleGesture+, leverages a diffusion model to generate gestures automatically. It incorporates a variety of modalities, including audio, text, speaker ID, and seed gestures. These diverse modalities are mapped to a hidden space and processed by a modified diffusion model to produce the corresponding gesture for a given speech input. Upon evaluation, the DiffuseStyleGesture+ demonstrated performance on par with the top-tier models in the challenge, showing no significant differences with those models in human-likeness, appropriateness for the interlocutor, and achieving competitive performance with the best model on appropriateness for agent speech. This indicates that our model is competitive and effective in generating realistic and appropriate gestures for given speech. The code, pre-trained models, and demos are available at https://github.com/YoungSeng/DiffuseStyleGesture/tree/DiffuseStyleGesturePlus/BEAT-TWH-main.
["gesture generation", "diffusion-based model", "conversation gesture"]
ABSTRACT
In this paper, we introduce the DiffuseStyleGesture+, our solution for the Generation and Evaluation of Non-verbal Behavior for Embodied Agents (GENEA) Challenge 2023, which aims to foster the development of realistic, automated systems for generating conversational gestures. Participants are provided with a pre-processed dataset and their systems are evaluated through crowdsourced scoring. Our proposed model, DiffuseStyleGesture+, leverages a diffusion model to generate gestures automatically. It incorporates a variety of modalities, including audio, text, speaker ID, and seed gestures. These diverse modalities are mapped to a hidden space and processed by a modified diffusion model to produce the corresponding gesture for a given speech input. Upon evaluation, the DiffuseStyleGesture+ demonstrated performance on par with the top-tier models in the challenge, showing no significant differences with those models in human-likeness and appropriateness for the interlocutor, and achieving competitive performance with the best model on appropriateness for agent speech. This indicates that our model is competitive and effective in generating realistic and appropriate gestures for given speech. The code, pre-trained models, and demos are available at this URL.

CCS CONCEPTS
• Human-centered computing → Human computer interaction (HCI); • Computing methodologies → Motion processing; Neural networks.

∗Both authors contributed equally to this research.
†Corresponding author

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted.
To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ICMI '23, October 09-13, 2023, Paris, France
© 2023 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00
https://doi.org/XXXXXXX.XXXXXXX

KEYWORDS
gesture generation, diffusion-based model, conversation gesture

ACM Reference Format:
Sicheng Yang, Haiwei Xue, Zhiyong Wu, Minglei Li, Zonghong Dai, Zhensong Zhang, Songcen Xu, and Xiaofei Wu. 2023. The DiffuseStyleGesture+ entry to the GENEA Challenge 2023. In Proceedings of ACM International Conference on Multimodal Interaction (ICMI '23). ACM, New York, NY, USA, 7 pages. https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION
Non-verbal behaviors, particularly gestures, play a crucial role in our communication [24]. They provide the necessary spark to animate robotic interfaces, encapsulate diverse functional information, and subtly deliver social cues. We can create more engaging, informative, and socially adept robotic systems by incorporating these behaviors. And gestures enrich communication with non-verbal nuances [24, 39]. Indeed, natural conversations often incorporate body gestures, and their absence can lead to perceptions of dullness or unnaturalness. Individuals use gestures to express ideas and feelings, either directly or indirectly. For instance, the formation of a circle using the thumb and forefinger (an open palm gesture) communicates the concept of "OK" [32].

3D gesture generation has drawn much attention in the community. Early studies leveraged unimodal inputs: Dai et al. [10] employ audio features to drive gesture synthesis via Bi-LSTMs, and some works incorporate GANs and VAEs to learn relevant pairs and improve synthesis quality [19, 26, 34]. However, these methods encountered challenges such as gesture diversity and training difficulties. On the other hand, some works also explored the textual modality: Chiu et al.
[ 6] introducing the DCNF model combiningspeech, textual content, and prosody, and Yoon et al. [ 38] propos-ing an Encoder-Decoder framework. Liang et al. [ 20] introducesSEmantic Energized Generation (SEEG), a novel approach that ex-cels at semantic-aware gesture generation. Recently, multimodalmethods [1, 9, 35, 37] integrating both audio and text have gainedattention, focusing on the semantic feature encoding and long se-quence modeling of 3D human motion. Further, many works beginICMI ’23, October 09-13 , 2023, Paris, France Sicheng Yang, et al.to pay attention to the speaker’s identity [ 21,22], style [ 8,33], emo-tion [ 25,36], etc. Despite significant advances, gesture generationusing a comprehensive multimodal approach remains challenging,mainly due to the inherent trade-off between quality and diversity[33].Recently, diffusion models [ 11] have shown great potential forgenerating motions [ 7,29,41], achieving high-quality outputs whilemaintaining diversity. Hence, in this gesture generation challenge,we attempt to apply diffusion models to tackle the problem ofmultimodal gesture generation.Inspired by [ 33], we find that the diffusion model-based approachfor co-speech gesture generation surpasses other deep generativemodels of motion in terms of quality and alignment with speech,while allowing for the generation of stylized and diverse gestures.In this paper, we incorporate textual modality using the DiffuseS-tyleGesture framework and restructure the architecture. Further-more, we also refined the representations of gesture and audio, inalignment with the challenge dataset. These enhancements allowthe model to generate high-quality, speech-aligned, speaker-specificstylized, and diverse gestures with significant controllability. 
Wesubmitted our system to the GENEA challenge 2023 [ 16], whichaims to consolidate and compare various methods for co-speechgesture generation and evaluation, promoting the development ofnon-verbal behavior generation and its evaluation via a large-scaleuser study involving a common dataset and virtual agent.The main contributions of our paper are: (1) We propose Dif-fuseStyleGesture+, a multimodal-driven gesture generation modelwith improved input network structure, input modality and featurerepresentation, as well as the diffusion model with cross-local atten-tion. (2) The evaluation of the GENEA Challenge demonstrates thatour model is among the first tier at human-likeness, appropriate-ness for the interlocutor, and achieves competitive performance onappropriateness for agent speech. (3) The ablation study validatesthe effectiveness of our proposed denoising module. Besides, wediscuss the stylization and diversity of the generated gestures, aswell as further discussion of more technical details.2 METHODOur method is based on DiffuseStyleGesture [ 33], a recent diffusionmodel-based speech-driven gesture generation approach. Besidesseed gesture, audio and speaker ID, we also take text as an additionalinput modality. The overview of this work is shown in Figure 1.2.1 Feature ExtractionWe extract the features of the input modalities as follows:•Gesture: We used 62 joints including the fingers, and eachframe represents the motion features in terms of position,velocity, acceleration, rotation matrix, rotational angularvelocity, and rotational angular acceleration of each joint.Although there are certain relations between positions, veloc-ities, accelerations, etc., which can be transformed into eachother, representing motion features with more motion datacan lead to better performance [ 8,40]. We denote the naturalmocap gestures clip as x0∈R(Nseed+N)×[62×(9+3)×3]. ThefirstNseed frames of the gestures clip x0are used as the seed... 
gesture, and the remaining N frames are what the model needs to predict based on text and audio.
Figure 1: (Top) Denoising module. A noising step t_d and a noisy gesture sequence x_t at this noising step, conditioned on c (including seed gesture, audio, speaker ID, and text), are fed into the model. (Bottom) Sample module. At each noising step t_d, we predict x̂_0 with the denoising process, then add noise back to noising step x_{t_d−1} with the diffuse process. This process is repeated from t_d = T_d until t_d = 0.
• Audio: More speech features also lead to better performance [4, 15]. Different representations complement each other; e.g., representations such as pitch contain rhythmic content, pre-trained model features such as WavLM [5] contain more complex information such as emotion, and Onsets contain beat information. We combine MFCC, Mel Spectrum, Pitch, Energy [39], WavLM [5], and Onsets [2] as audio features. We denote the features of an audio clip as A ∈ R^(N×(40+64+2+2+1024+1)).
• Speaker ID: The ID of the speaker is represented as a one-hot vector where only the element of the selected ID is nonzero. The Talking With Hands dataset has a total of 17 speakers, so the dimension of the speaker ID is 17.
• Text: Following [39], we use FastText [3] to obtain 300-D word embeddings. We use one bit to indicate whether there is a laugh or not, and the last bit is set to 0 as in [4]. Each word is mapped to its pre-trained word embedding at word-level granularity.
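The per-frame feature dimensions above can be illustrated with a small numpy sketch. The actual extractors (MFCC, WavLM, FastText, etc.) are stubbed out with random placeholder arrays of the stated sizes, and all function names are ours, not the authors':

```python
import numpy as np

# Minimal sketch of the per-frame feature assembly described above.
# Real features would come from actual extractors; here they are random
# placeholders with the dimensionalities given in the text.
N = 120  # frames in one clip

def assemble_audio_features(n_frames, rng):
    parts = [
        rng.standard_normal((n_frames, 40)),    # MFCC
        rng.standard_normal((n_frames, 64)),    # Mel Spectrum
        rng.standard_normal((n_frames, 2)),     # Pitch
        rng.standard_normal((n_frames, 2)),     # Energy
        rng.standard_normal((n_frames, 1024)),  # WavLM
        rng.standard_normal((n_frames, 1)),     # Onsets
    ]
    return np.concatenate(parts, axis=1)        # A in R^{N x 1133}

def speaker_one_hot(speaker_id, n_speakers=17):
    v = np.zeros(n_speakers)
    v[speaker_id] = 1.0
    return v

def text_features(word_vectors, laugh_flags):
    # 300-D word embedding + 1 laugh bit + 1 zero bit -> 302-D per frame
    pad = np.zeros((word_vectors.shape[0], 1))
    return np.concatenate([word_vectors, laugh_flags[:, None], pad], axis=1)

rng = np.random.default_rng(0)
A = assemble_audio_features(N, rng)
S = speaker_one_hot(3)
T = text_features(rng.standard_normal((N, 300)), np.zeros(N))
print(A.shape, S.sum(), T.shape)  # (120, 1133) 1.0 (120, 302)
```

The concatenated audio dimension 40+64+2+2+1024+1 = 1133 and the 302-D text features match the shapes stated in the paper.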
Then the features of the text clip are T ∈ R^(N×302).

2.2 Gesture Denoising
Unlike text semantics-driven motion generation [13, 29, 41], which only needs a token to contain the semantics of a sentence and does not have to be aligned in time, gesture generation is temporally perceptible; that is, the gestures are related to the rhythm of the speech. So we perform linear interpolation of the extracted audio features A in the temporal dimension in order to align with the gestures.
The DiffuseStyleGesture+ entry to the GENEA Challenge 2023. ICMI '23, October 09-13, 2023, Paris, France.
Gestures also differ from music-driven dance generation [28, 30, 42]: gestures and semantics are temporally related as well; for example, the hand opens when saying 'big'. As in [4, 37], we use frame-level aligned word vectors T.
Our goal is to synthesize a high-quality and speech-matched human gesture x̂ of length N given conditions c using the diffusion model [11]. Following [29], we predict the signal itself instead of predicting the noise at each noising step t_d. As shown at the top of Figure 1, the Denoising module reconstructs the original gesture x_0 from the pure noise x_t, the noising step t_d, and the conditions c:

x̂_0 = Denoise(x_{t_d}, t_d, c)    (1)

where c = [S, D, A, T]. During training, the noising step t_d is sampled from a uniform distribution over {1, 2, ..., T_d}, with position encoding [31]. x_{t_d} is the noisy gesture with the same dimension as the real gesture x_0, obtained by sampling from the standard normal distribution N(0, I).
We add the information of the noising step T_d and speaker ID S to form Z, and replicate and stack them into a sequence feature of length N_seed + N. The overall attention mechanism is similar to [33], using cross-local attention [27], self-attention [31], and relative position encoding (RPE) [14].
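The x̂_0-prediction objective described here can be sketched as a single training step. The attention-based denoiser is replaced by a stand-in function, the linear noise schedule is an illustrative assumption (the paper uses a cosine schedule), and all names are ours:

```python
import numpy as np

# Sketch of one x0-prediction training step: corrupt x0 to x_t with the
# forward process, ask a (stand-in) denoiser for x0_hat, and score it with
# a Huber loss. The real model conditions on c = [S, D, A, T]; omitted here.
rng = np.random.default_rng(1)
T_d = 1000
betas = np.linspace(1e-4, 0.02, T_d + 1)  # illustrative linear schedule
alpha_bar = np.cumprod(1.0 - betas)

def huber(diff, delta=1.0):
    a = np.abs(diff)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta)).mean()

def training_step(x0, denoise_fn):
    t = int(rng.integers(1, T_d + 1))              # t_d ~ Uniform({1..T_d})
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
    x0_hat = denoise_fn(x_t, t)                    # model predicts x0 itself
    return huber(x0 - x0_hat)

x0 = rng.standard_normal((120, 2232))  # (N frames, 62*(9+3)*3 pose features)
loss = training_step(x0, lambda x_t, t: np.zeros_like(x_t))
print(loss >= 0.0)
```

A trained denoiser would replace the zero-returning lambda; predicting the signal x_0 rather than the noise is the choice the paper adopts from [29].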
The difference is that we condition on D in the first N_seed frames and on A and T in the last N frames, so that the smooth transition between segments is considered in the first N_seed frames and the corresponding gestures are generated in the last N frames based on audio and text, which reduces the redundancy of the inputs.
The Denoising module is then trained by optimizing the Huber loss [12] between the generated gestures x̂_0 and the real human gestures x_0:

L = E_{x_0 ∼ q(x_0|c), t_d ∼ [1, T_d]} [HuberLoss(x_0 − x̂_0)]    (2)

2.3 Gesture Sampling
As shown at the bottom of Figure 1, when sampling, the initial noisy gesture x_T is sampled from the standard normal distribution, and each other x_{t_d}, t_d < T_d, is the result of the previous noising step. The final gesture is given by splicing a number of clips of length N. The seed gesture for the first clip is a gesture from the dataset; the seed gesture for every other clip is the last N_seed frames of the gesture generated in the previous clip. For every clip, at every noising step t_d, we predict the clean gesture x̂_0 using Equation (1) and add Gaussian noise back to noising step x_{t_d−1} with the diffuse process [11]. This process is repeated from t_d = T_d until x_0 is reached.

3 EXPERIMENT
3.1 Experiment Setting
We trained on all the data in the GENEA Challenge 2023 [16] training dataset, which is based on Talking With Hands [18]. In this work, gesture data are cropped to a length of 150 frames (5 seconds, 30 fps), with the first N_seed = 30 frames as the seed gesture and the last N = 120 frames used to calculate the loss between generated and real gestures in Equation (2). We apply standard normalization (zero mean and unit variance) to all joint feature dimensions.
Figure 2: Box plot visualising the rating distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval).
Box edges are at the 25th and 75th percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.
The latent dimension of the attention-based encoder is 512. The cross-local attention networks use 8 heads, 48 attention channels, a window size of 15 frames (0.5 seconds) where each window also attends to the one in front of it, and a dropout of 0.1. The self-attention networks are composed of 8 layers with 8 heads and a dropout of 0.1. The AdamW [23] optimizer (learning rate 3×10^-5) is used with a batch size of 200 for 1,200,000 samples. Our models were trained with T_d = 1000 noising steps and a cosine noise schedule. The whole framework can be learned in about 132 hours on one NVIDIA V100 GPU.

3.2 Evaluation Setting
The challenge organizers conducted a detailed evaluation comparing all submitted systems [16]. Three dimensions were evaluated: human-likeness, appropriateness for agent speech, and appropriateness for the interlocutor. We strongly recommend the reference [16] for more details on the evaluation. The following abbreviations are used to denote each model in the evaluation:
• NA: Natural mocap ('NA' for 'natural').
• BM: The official monadic baseline [4], a model based on Tacotron 2 that takes information (WAV audio, TSV transcriptions, and speaker ID) from the main agent as input ('B' for 'baseline', 'M' for 'monadic').
• BD: The official dyadic baseline [4], which also takes information from the interlocutor in the conversation into account when generating gestures ('D' for 'dyadic').
• SA–SL: 12 submissions (ours is SF) to the final evaluation ('S' for a submission).

3.3 Evaluation Analysis
3.3.1 Human-likeness. For human-likeness, participants were asked "Please indicate on a sliding scale how human-like the gesture motion appears".
The rating scale from 100 (best) to 0 (worst) is anchored by partitioning the sliders into five equal-length intervals labeled "Excellent", "Good", "Fair", "Poor", and "Bad".
Figure 3: Significant differences between conditions in the two appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. White means the condition listed on the y-axis achieved a mean appropriateness score significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.
Box plots and significance comparisons are shown in Figure 2. The median of our system (SF) was 65 ∈ [64, 67] and the mean was 63.6 ± 1.3, and its human-likeness was not significantly different from that of system SG [16]. This result shows that our model can generate very high-quality gestures, though somewhat below natural mocap, which has a median of 71 ∈ [70, 71] and a mean of 68.4 ± 1.0.

3.3.2 Appropriateness for agent speech. In terms of appropriateness for agent speech, participants were asked "Which character's motion matches the speech better, both in terms of rhythm and intonation and in terms of meaning?" Five response options were available: "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better".
Table 1: Ablation study results. '+' indicates additional modules and ↔ indicates the length of the modality in the time dimension.
Bold indicates the best metric.

Name | FGD on feature space ↓ | FGD on raw data space ↓
Ours | 14.461 | 531.172
+ Seed gesture↔N + Speech↔N_seed (DiffuseStyleGesture [33]) | 19.017 | 767.503
+ Seed gesture↔(N+N_seed) | 15.539 | 616.437

The mean appropriateness scores (MAS) of the submitted systems are close to each other, so we report significant differences as shown in Figure 3(a). Our system (SF) has a MAS of 0.20 ± 0.06 and a Pref. matched (identifying how often test-takers preferred matched motion in terms of appropriateness) of 55.8%, which is significantly better than submitted systems SH, SL, and SC. However, it falls significantly short of natural mocap (NA), with a MAS of 0.81 ± 0.06 and a Pref. matched of 73.6%, and of SG.

3.3.3 Appropriateness for the interlocutor. Additionally, an interlocutor who converses with the main agent is added to this user interface for scoring. Please refer to [16] for more details. For appropriateness for the interlocutor, participants were asked "In which of the two videos is the Main Agent's motion better suited for the interaction?". The response options were the same as before, i.e., "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better". We also report significant differences, as shown in Figure 3(b). Natural mocap (NA), with a MAS of 0.63 ± 0.08 and a Pref. matched of 69.8%, is significantly more appropriate for the interlocutor than all other conditions. Our system (SF) has a MAS of 0.04 ± 0.06 and a Pref. matched of 51.5%, which is significantly more appropriate than conditions SG and SH, and not significantly different from the other conditions. Our system does not use interlocutor information and (as expected) is not significantly different from chance.

3.4 Ablation Studies
Moreover, we conduct ablation studies to address the performance effects of different architectures in our model.
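The ablations are compared with a Fréchet-style distance between gesture feature distributions. As a reference, such a distance can be sketched in plain numpy; this is a generic implementation over latent features, not the challenge organizers' autoencoder pipeline:

```python
import numpy as np

# Fréchet distance between two Gaussians fitted to feature sets:
# ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2}).
# The matrix square root is computed numpy-only via a symmetric
# eigendecomposition, using Tr((C_a C_b)^{1/2}) = Tr((C_a^{1/2} C_b C_a^{1/2})^{1/2}).
def _sqrtm_psd(m):
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def frechet_distance(feats_a, feats_b):
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    sqrt_a = _sqrtm_psd(cov_a)
    covmean = _sqrtm_psd(sqrt_a @ cov_b @ sqrt_a)
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2 * covmean))

rng = np.random.default_rng(2)
real = rng.standard_normal((500, 32))
fake = rng.standard_normal((500, 32)) + 1.0  # shifted distribution
print(frechet_distance(real, real) < 1e-6, frechet_distance(real, fake) > 1.0)
```

Identical feature sets give a distance of (numerically) zero, while a mean-shifted set gives a large distance, mirroring how lower FGD indicates generated gestures whose feature statistics are closer to real motion.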
We use Fréchet gesture distance (FGD) [37] as the objective evaluation metric, which is currently the closest to human perception among objective evaluation metrics [17]; the lower the FGD, the better. The FGD is computed using the autoencoder provided by the challenge organizers. Our ablation studies, summarized in Table 1, indicate that when the input of [33] is used (the information of seed gestures and speech is given directly over the full length of a training sample), both metrics perform worse; when additional seed gestures are given over the full length of a training sample in our model, both metrics also become worse. The purpose of using seed gestures [33, 37] is to smooth the transition between generated segments, so they should not contain speech information and should only be considered at the beginning, for consistency with the previously generated gestures. We also learn that although the diffusion model can extract useful information from redundant representations, careful design of the denoising module's network structure can further improve performance.

3.5 Discussion
3.5.1 Takeaways. Our co-speech gesture generation model (SF), based on the diffusion model, exhibits comparable levels of human-likeness and appropriateness for the interlocutor when compared to the best-performing models (SG, SA). Furthermore, it achieves competitive performance with the leading model (SG) in terms of appropriateness for agent speech. These findings suggest that our proposed model performs at a top-tier level. Our model achieves good results due to the ability of the diffusion model to generate high-quality gestures and the local attention-based structure to generate gestures that correspond to the current short duration of speech.
Notably, the diffusion-model basis makes it easy to generate diverse gestures, since the main part of the input is noise and any seed gesture can be set. Moreover, building on the structure of the diffusion model, we add random masks to the denoising module, which enables interpolation and extrapolation of conditions such as speaker identity (style), and a high degree of control over the style intensity of the generated gestures. However, stylization and diversity were not included as evaluation dimensions in the challenge.

3.5.2 Limitation. Our model does not consider information about the interlocutor, and accordingly is not significantly different from a random selection on that dimension. Taking interlocutor information into account is important in interaction, and this is a direction for future research. Moreover, pre-processing the data should improve the results: we do nothing special with motions that contain no hand movement and still train with their hand data, which can lead to poorer hand results. For exploration of the dataset and more discussion, please refer to the Appendix.

3.5.3 More Discussion. We also tried adding the BEAT [21] dataset (all of it / some of the speakers) to train together with Talking With Hands, but we obtained worse results; the model did not converge. We suspect the reason is that the BEAT dataset is very large, and the diffusion model needs more time to be trained well. Although we did not consider interlocutors, in terms of appropriateness for the interlocutor, our system (SF) is significantly more appropriate than SG and SH, and not significantly different from the other conditions. It is worth noting that SG is the best-performing model on the first two evaluation dimensions. We suspect the reason is related to the evaluation setting, since "segments should be more or less complete phrases" in the evaluation.
However, the evaluation during silence is equally important: the model should learn from the data how to behave when not talking, such as idling and other small gestures, without unexpected actions. Although we did not consider interlocutor information, it is impressive that our model is able to remain idle while the other person is talking (i.e., when the main agent is not talking).
The diffusion model takes a long time to train and run inference. The evaluation was performed using 8-10 seconds of speech; results from longer speech evaluations may be more consistent with human perception. When the number of participants in the speech appropriateness evaluation was 448, there was no difference between our system (SF) and SG; when the number of participants was increased to 600, SG was significantly better than all of the submitted systems. This suggests the differences between the two systems were relatively small and only became statistically significant, after FDR correction, once a large number of subjects had been recruited and evaluated.
Figure 4: Case study of generated gestures: (a) a gesture indicating largeness; (b) a pointing gesture; (c) a thinking gesture. The right side of each figure shows the generated gestures.
3.5.4 Case Study. Our diffusion-based method can extract semantic information and generate human-like gestures. For instance, when the speaker says "large", our system generates a gesture indicating largeness. When the speaker asks "Where do you stay?", our system generates a pointing gesture, mimicking human behavior. Our diffusion-based models can also generate incidental actions for laughter and surprise. For example, when the speaker laughs, the model generates a body shake, mimicking human laughter. When the speaker is thinking, the model generates a corresponding thinking action.
This suggests that diffusion-based models can learn semantics and synthesize semantic actions in specific situations.

4 CONCLUSION
In this paper, we propose DiffuseStyleGesture+, a diffusion model-based method for speech-driven co-speech gesture generation. Building on the DiffuseStyleGesture framework, we add the text modality, design the input architecture of the modalities more logically, and tune the representations of gesture and audio to the challenge dataset, enabling the generation of high-quality, speech-matched, speaker-specific stylized, and diverse gestures that are highly controllable through these conditions. The proposed model is in the first tier for human-likeness and appropriateness for the interlocutor, with no significant difference from the best model, and achieves competitive performance with the best model on appropriateness for agent speech, showing the effectiveness of the proposed method. However, compared with natural mocap, there is still much room for improvement worth further exploration.

ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (62076144), the Shenzhen Science and Technology Program (WDZC20200818121348001), and the Shenzhen Key Laboratory of Next Generation Interactive Media Innovative Technology (ZDSYS20210623092001004).
ILmSIRnlQ
good results by diffusion based method
10: Top 5% of accepted papers, seminal paper
The paper describes an updated version of a previously proposed diffusion-based gesture generation model. The method improves the input network structure, input feature representation, and the cross-local attention mechanism of the denoising model. The paper is well organized and written. It properly refers to recent works on data-driven gesture generation. The technical descriptions are well written and the experiments would be reproducible. The paper updates only minor points, but it is valuable to see the results of applying the diffusion-based method to this dataset.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
Mm44wlJICIj
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation
["Gwantae Kim", "Yuanming Li", "Hanseok Ko"]
This paper describes a diffusion model for co-speech gesture generation presented by the KU-ISPL entry of the GENEA Challenge 2023. We formulate the gesture generation problem as a co-speech gesture generation problem and a semantic gesture generation problem, and we focus on solving the co-speech gesture generation problem by a denoising diffusion probabilistic model with text, audio, and pre-pose conditions. We use the U-Net with cross-attention architecture as a denoising model, and we propose a gesture autoencoder as a mapping function from the gesture domain to the latent domain. The collective evaluation released by GENEA Challenge 2023 shows that our model successfully generates co-speech gestures. Our system receives a mean human-likeness score of 32.0, a preference-matched score of appropriateness for the main agent speech of 53.6%, and an interlocutor speech appropriateness score of 53.5%. We also conduct an ablation study to measure the effects of the pre-pose. By these results, our system contributes to co-speech gesture generation for natural interaction.
["GENEA Challenge", "co-speech gesture generation", "diffusion", "neural networks", "generative models"]
ABSTRACT
This paper describes a diffusion model for co-speech gesture generation presented by the KU-ISPL entry of the GENEA Challenge 2023. We formulate the gesture generation problem as a co-speech gesture generation problem and a semantic gesture generation problem, and we focus on solving the co-speech gesture generation problem by a denoising diffusion probabilistic model with text, audio, and pre-pose conditions. We use the U-Net with cross-attention architecture as a denoising model, and we propose a gesture autoencoder as a mapping function from the gesture domain to the latent domain. The collective evaluation released by GENEA Challenge 2023 shows that our model successfully generates co-speech gestures. Our system receives a mean human-likeness score of 32.0, a preference-matched score of appropriateness for the main agent speech of 53.6%, and an interlocutor speech appropriateness score of 53.5%. We also conduct an ablation study to measure the effects of the pre-pose. By these results, our system contributes to co-speech gesture generation for natural interaction.

CCS CONCEPTS
• Computing methodologies → Animation; • Human-centered computing → Human computer interaction (HCI).

KEYWORDS
GENEA Challenge, co-speech gesture generation, diffusion, neural networks, generative models

ACM Reference Format:
Gwantae Kim, Yuanming Li, and Hanseok Ko. 2023. The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation. In Proceedings of 25th ACM International Conference on Multimodal Interaction (ICMI'23). ACM, New York, NY, USA, 8 pages. https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION
Synthesizing synchronized and human-like gestures plays a crucial role in improving immersion, engagement, and naturalness for embodied virtual agents and humanoid robots.
During the human-computer interaction (HCI) process, humans use both verbal and non-verbal expressions to convey their intent to the interlocutor. Gesture generation, one of the main challenges for non-verbal interaction, aims to synthesize natural-looking and meaningful human gestures. The task can be separated by whether verbal expression exists or not. When verbal expressions, such as audio or text, are given, the gesture generation model focuses on making gestures that emphasize the meaning of the verbal expressions. In the other case, the model should generate gestures that deliver the intent whether or not verbal expressions are given. We define the task with verbal information as co-speech gesture generation, and the task focused on synthesizing meaningful body motions that deliver intent as semantic gesture generation. In this research, we focus on generating high-fidelity co-speech gestures.
There are many challenges in co-speech gesture generation. The first is timing synchronization. Since the speech and gestures are shown to the interlocutor sequentially, he or she will be confused if the gestures depart from the speech.
For example, if the start and end timing of the gestures slightly differ from the speech, users will think it is an implementation error. A more detrimental situation is a "traffic jam" during continuous generation: once the timing is out of sync, the timing between speech and gestures continually drifts apart and the discomfort gradually increases. By similar reasoning, semantic synchronization, the second challenge, is also important for delivering the proper intent. For example, when people say "I disagree." while nodding, the interlocutor will be confused about whether it is positive or negative.
The third obstacle is noise robustness. 3D pose estimation or motion capture is used to acquire gesture data. However, the quality of raw data obtained by 3D pose estimation is often insufficient because the algorithm is basically image-to-3D reconstruction, which is a one-to-many problem. Motion capture is better, but it is too expensive and time-consuming; to secure quality, the cost increases exponentially. Therefore, the raw data may contain noise. Since training with noisy data hurts both quantitative and qualitative performance, a workaround such as pre-processing or noise-robust training is needed.
To tackle these problems, deep learning-based approaches have recently been applied to generating co-speech gestures. There are three types of training strategies: reconstruction-based methods [15, 18, 34], generative adversarial network (GAN)-based methods [8, 25, 33], and diffusion-based methods [3, 5, 7, 12, 38]. The reconstruction-based co-speech gesture generation methods directly estimate gestures from text or audio. Although these methods produce reasonable results in terms of joint error, they show disadvantages in terms of diversity.
ICMI'23, October 09–13, 2023, Paris, France. Kim et al.
To generate various results without quantitative performance degradation, GAN-based co-speech gesture generation models are trained by controlling the weight between reconstruction loss and adversarial loss. Recently, denoising diffusion probabilistic models (DDPMs) have achieved huge success in the generative modeling and computer vision fields and are expanding to other research fields [14, 24]. In particular, the diffusion model can synthesize various images that reflect input conditions, even when its semantic space is large. Since the semantic space of speech for co-speech gesture generation is large, the diffusion model may help to synthesize varied and synchronized results. Therefore, the goal of this paper is to find a suitable diffusion model structure for co-speech gesture generation.
In this paper, we propose a diffusion-based co-speech gesture generation method. We establish a gesture autoencoder to project from gesture space to feature space and vice versa. The model is configured to select suitable features according to the characteristics of the gesture data. We also present how to deliver audio and text information to the diffusion model; we use validated audio features and a pre-trained language model to provide rich features.
The data and evaluations are provided by the GENEA Challenge 2023 [20]. Thanks to the good quality of the data, the noise robustness problem is under control and we can focus on the synchronization problems. The evaluations, which cover human-likeness and appropriateness for the main agent and the interlocutor, are also well formulated to measure generation performance. The code is available here¹.

2 RELATED WORKS
2.1 Co-speech gesture generation
Kucherenko et al. [18] proposed an autoencoder-style audio-to-gesture model with hidden representation learning.
The method first finds a hidden embedding space for gestures with an autoencoder and then trains the audio encoder to find a joint embedding space between audio and gestures. Yoon et al. [34] trained a sequence-to-sequence LSTM model to map text transcriptions to 2D co-speech gestures. Kim et al. [15] trained a transformer-based autoencoder with self-supervised pre-training. These approaches use reconstruction loss to optimize the model. Chang et al. [4] presented a locality-constraint attention-based gesture generation model inspired by Tacotron2. StyleGestures [1] uses normalizing flows to generate gestures from speech. Audio2Gestures [22] synthesizes gestures using a variational autoencoder. Yoon et al. [33] train the model with adversarial loss and reconstruction loss to generate gestures from trimodal contexts. HA2G [25] adopts a hierarchical decoder to address the structural information of the joints. GestureMaster [37] uses a rhythm embedding module, a style embedding module, motion graph construction, and graph-based optimization to extract features and generate gestures.

2.2 Semantic gesture generation
Kim et al. [16] generate gestures from semantics, either given directly or extracted from text. The method with an intent classifier emphasizes co-speech gesture generation: the co-speech gesture model is selected to generate gestures if the intent is unclear; otherwise this method is used to synthesize gestures. SEEG [23] generates semantic energized co-speech gestures with a semantic prompt gallery, a semantic prompter, and semantic energized learning. Gesticulator [19] synchronizes text and audio features in the encoding phase and generates gestures by autoregression.

¹https://github.com/GT-KIM/GENEA2023-KU-ISPL

2.3 Diffusion-based motion generation
Alexanderson et al. [2] proposed conformer [10]-based diffusion models for gesture generation, dance synthesis, and path-driven locomotion. Zhu et al.
[38] migrated the diffusion model to speech-driven co-speech gesture generation with a diffusion gesture stabilizer and implicit classifier-free guidance. FLAME [17] generates and edits human motion with a pre-trained language model and a transformer. MotionDiffuse [36] and MDM [30] also synthesize human motions from text descriptions. [3] learns a gesture-transcript joint embedding space using contrastive learning; the learned embeddings are incorporated into the diffusion model via an adaptive instance normalization layer. [5] synthesizes motions with a diffusion model in a latent space: motion representations are projected into the latent space, diffused, and reconstructed to the original motion space.

3 CO-SPEECH GESTURE GENERATION MODEL
Figure 1 depicts an overview of the proposed model for generating high-fidelity co-speech gestures. In this section, we first introduce the problem formulation of co-speech gesture generation (Section 3.1). We propose the gesture autoencoder, which is designed to project the gesture space to a feature space (Section 3.2). We then present classifier-free guidance for applying speech conditions to co-speech gestures (Section 3.3). Furthermore, we establish the forward diffusion and the reverse conditional generation process in the feature space (Section 3.4).

3.1 Problem Formulation
Co-speech gesture training data often consist of a 3D pose sequence x, audio a, text (sentence) s, and metadata. The generative model G parameterized by θ is optimized to synthesize x, conditioned on the audio a, text s, and the pre-defined initial poses x_{-1} of M frames. The learning objective of the problem can be formulated as argmin_θ ||x − G_θ(a, s, x_{-1})||.
However, samples in the training data often have a long duration. To reduce the computational cost and memory usage, every modality of a sample is cropped into segments x = {x_1, ..., x_i}, a = {a_1, ..., a_i}, and s = {s_1, ..., s_i}, where x_i has N frames and a_i, s_i have the same time length as x_i.
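The segment cropping described above, including reusing the last M frames of a segment as the pre-poses for the next one, can be sketched as follows; the function name and shapes are illustrative, not from the paper:

```python
import numpy as np

# Cut a long pose sequence into N-frame segments; the last M frames of the
# previous segment become the pre-poses x_{i-1}^{(N-M):N} for the next one.
def crop_segments(x, n_frames, m_pre):
    segments, pre_poses = [], []
    pre = x[:m_pre]  # pre-defined initial poses for the first segment
    for start in range(0, len(x) - n_frames + 1, n_frames):
        seg = x[start:start + n_frames]
        segments.append(seg)
        pre_poses.append(pre)
        pre = seg[n_frames - m_pre:]  # carried over to the next segment
    return segments, pre_poses

x = np.arange(360).reshape(120, 3)  # toy pose sequence: 120 frames, 3 dims
segs, pres = crop_segments(x, n_frames=40, m_pre=10)
print(len(segs), segs[0].shape, np.array_equal(pres[1], segs[0][-10:]))
# 3 (40, 3) True
```

This carry-over of the previous segment's tail is what makes the generation autoregressive across segments.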
Now the generative model G estimates x_i from the audio a_i, the text s_i, and the M pose frames from the previous segment x_{i−1}^{(N−M):N}, instead of synthesizing x at once. Finally, the generative model G synthesizes the gestures {x_1, ..., x_i} continuously. The model is autoregressive because the poses generated for the previous segment are used to synthesize the current segment, and stochastic because the initial diffusion feature map is random noise.
The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation. ICMI'23, October 09–13, 2023, Paris, France.
Figure 1: Overview of the proposed diffusion-based co-speech gesture generation method. The model is autoregressive and probabilistic. For the N-th generation, audio, text, and pre-poses are projected to the latent space and used as conditions. The initialized Gaussian noise is iteratively denoised by the reverse process. The output latent vector is reconstructed to the gesture space by the decoder.

3.2 Gesture Autoencoder
In Stable Diffusion [28], the latent diffusion model is flexible, computationally tractable, and sometimes achieves quality improvements. The gesture autoencoder focuses on finding a good latent embedding space projected from the gesture space. It consists of two autoencoder models: a pose autoencoder and a motion autoencoder. Since a gesture is sequential pose data, we design the pose autoencoder to project the raw pose space to a latent space, and the motion autoencoder to find correlations along the time axis.
The pose encoder and decoder each consist of 3 fully-connected layers with dropout [29] and the GELU activation function [11]. The input pose sequence x ∈ R^{N×3J} is projected to z' ∈ R^{N×D} by the pose encoder, where z' denotes the mid-level hidden representation, J is the number of joints, and D is the dimension of z'; the pose decoder performs the reverse projection. The pose autoencoder is first trained with an L1 reconstruction loss.
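As a shape-level illustration of the pose autoencoder (not the authors' trained model), a toy numpy forward pass with random placeholder weights might look like this; the joint count J is a placeholder, since the paper does not state it, and dropout is omitted as at inference time:

```python
import numpy as np

# Toy forward pass of the 3-layer fully-connected pose encoder/decoder
# with GELU activations, showing x (N x 3J) -> z' (N x D) -> reconstruction.
def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, layers):
    # GELU after every layer except the last (kept linear)
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = gelu(x)
    return x

rng = np.random.default_rng(3)
N, J, D, H = 50, 83, 64, 256  # frames, joints (placeholder), latent, hidden
dims_enc = [(3 * J, H), (H, H), (H, D)]
enc = [(rng.standard_normal(s) * 0.02, np.zeros(s[1])) for s in dims_enc]
dec = [(rng.standard_normal((b, a)) * 0.02, np.zeros(a))
       for a, b in reversed(dims_enc)]

pose = rng.standard_normal((N, 3 * J))  # x in R^{N x 3J}
z_prime = mlp(pose, enc)                # z' in R^{N x D}
recon = mlp(z_prime, dec)               # back to pose space
print(z_prime.shape, recon.shape)       # (50, 64) (50, 249)
```

Training would minimize np.abs(pose - recon).mean(), the L1 reconstruction loss mentioned in the text, before freezing the weights for the diffusion stage.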
Once the pose autoencoder is optimized, its parameters are frozen for the remaining training stages, such as the diffusion training stage. The motion autoencoder aims to capture the sequential information of the data. Thus, the motion encoder and decoder consist of 3 gated recurrent unit (GRU) layers [6] and 3 multi-head self-attention layers [31], which have a strong capacity for sequential data modeling. The motion encoder is formulated as

z = MHSA(GRU(z′)),  (1)

where MHSA(X) = Attention(X, X, X). The attention mechanism is

Attention(Q, K, V) = softmax(QK^T / √d) · V,  (2)

where Q, K, and V are the query, key, and value from the feature matrix, d is the channel dimension, and T is the matrix transpose operation. The mid-level hidden representation z′ ∈ R^{N×D} is projected to z ∈ R^{N×D} by the motion encoder, where z denotes the hidden representation in feature space; the motion decoder performs the reverse projection. The motion autoencoder is trained individually with an L1 reconstruction loss. The parameters of the motion autoencoder are also frozen after this training stage.

3.3 Conditioning
Diffusion models are theoretically capable of modeling the conditional distribution p(z|y). This can be implemented with a conditional denoising autoencoder ε_θ(z_t, t, y), where y ∈ {a, s, z_{i−1}}, to steer the generation process through the inputs y. To combine the conditional information and the latent vector in the U-Net backbone, we use a cross-attention mechanism, as in Stable Diffusion [28]. The three modalities, audio, text, and pre-pose, are used as conditions in the diffusion process. The pre-processed audio features, text features, and pre-pose features are projected to embedding vectors by fully-connected layers.
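Eq. (2) can be implemented directly in numpy. The sketch below shows a single head with identity projections (so Attention(X, X, X), as in the MHSA definition above); a full multi-head layer would add learned Q/K/V projections and head splitting.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V, per Eq. (2)."""
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))
    return A @ V, A

def self_attention(X):
    """Single-head self-attention with identity projections: Attention(X, X, X)."""
    return attention(X, X, X)

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 32))   # (sequence length, channel dim d)
out, weights = self_attention(X)
```

Each row of `weights` is a probability distribution over the sequence, so the output mixes time steps according to their pairwise similarity, which is what lets the motion encoder relate frames across the time axis.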
These three embedding vectors are added to the time embedding vector and propagate the information of each modality to the denoising U-Net model.

3.4 Diffusion
DDPMs define latent variable models of the form p_θ(x_0) = ∫ p_θ(x_{0:T}) dx_{1:T}, where x_{1:T} are latent variables in the same sample space as x_0 with the same dimensionality. The forward process, also called the diffusion process, approximates the posterior distribution q(x_{1:T}|x_0) by a Markov chain that gradually adds Gaussian noise to the data according to the variance schedule β_1, ..., β_T:

q(x_{1:T}|x_0) = ∏_{t=1}^{T} q(x_t|x_{t−1}),  (3)

where

q(x_t|x_{t−1}) = N(x_t; √(1 − β_t) x_{t−1}, β_t I).  (4)

The forward process variances β_t can be learned by reparameterization or held constant as hyperparameters. Since our model uses the gesture autoencoder to map poses to latent embeddings, the latent embeddings are gradually corrupted by noise, which finally leads to pure white noise as T goes to infinity. Therefore, the prior latent distribution p(x_T) is N(x_T; 0, I), containing only Gaussian noise. The reverse process estimates the joint distribution p_θ(x_{0:T}). It is defined as a Markov chain with learned Gaussian transitions starting from p(x_T) = N(x_T; 0, I):

p_θ(x_{0:T}) = p(x_T) ∏_{t=1}^{T} p_θ(x_{t−1}|x_t),  (5)

where

p_θ(x_{t−1}|x_t) = N(x_{t−1}; μ_θ(x_t, t), Σ_θ(x_t, t)).  (6)

The corrupted noisy latent embedding x_t is sampled by q(x_t|x_0) = N(x_t; √ᾱ_t x_0, (1 − ᾱ_t) I), where α_t = 1 − β_t and ᾱ_t = ∏_{s=1}^{t} α_s. Since co-speech gesture generation is a conditional generation problem, we have to provide the additional inputs a, s, and z_{i−1} to the model; these conditions are injected into the generation process. The reverse process at each timestep can be updated for our problem as:

p_θ(z_{t−1}|z_t, y) = N(z_{t−1}; μ_θ(z_t, t, y), β_t I).  (7)

The reverse process starts by sampling Gaussian noise z_T ∼ N(0, I) and following the Markov chain to iteratively denoise the latent variable z_t via Eq.
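The closed-form corruption q(x_t|x_0) above can be demonstrated numerically. The linear β-schedule below is an assumption for illustration (the paper specifies T = 1000 denoising steps but not the schedule); the point is that ᾱ_t decays monotonically, so the latent is essentially pure noise at t = T.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear variance schedule (assumed, not from the paper)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative product: alpha_bar_t = prod_{s<=t} alpha_s

def q_sample(z0, t, eps):
    """Sample z_t ~ q(z_t | z_0) = N(sqrt(alpha_bar_t) z_0, (1 - alpha_bar_t) I)."""
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
z0 = rng.normal(size=(128, 128))                 # a dummy clean latent (N x D)
zT = q_sample(z0, T - 1, rng.normal(size=z0.shape))
# At t = T-1 the signal coefficient sqrt(alpha_bar) is tiny, so zT is essentially pure noise.
```

This one-shot sampling of z_t is what makes diffusion training efficient: the model never has to simulate all t intermediate steps of the Markov chain in Eq. (3).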
7 to obtain the original latent vector z_0. The variational lower bound on the negative log-likelihood is used to optimize the diffusion model. We follow [12] to simplify the training objective to an ensemble of MSE losses:

L(θ) = E_{t, x_0, ε} [ ||ε − ε_θ(√ᾱ_t x_0 + √(1 − ᾱ_t) ε, y, t)||² ],  (8)

where t is uniformly sampled between 1 and T, and ε is drawn from N(0, I). The diffusion model is trained by gradient descent steps on Eq. 8 until convergence.

4 EXPERIMENT
4.1 Data Processing
We trained our model on the GENEA Challenge 2023 dataset [35], derived from the Talking With Hands 16.2M dataset [21]. The dataset comprises a training set of 371 clips, a validation set of 40 clips, and a test set of 70 clips. Each clip consists of audio recordings, transcriptions, gesture motions for the main agent, gesture motions for the interlocutor, and associated metadata. The audio has a sampling rate of 44100 Hz. The gesture motions are stored in BVH (Biovision Hierarchy) format at a frame rate of 30 frames per second (FPS). Our system exclusively utilizes the audio and text data of the main agent, disregarding the interlocutor's information and the metadata. We extract the mel-spectrogram, mel-frequency cepstral coefficients, and prosody features using n_fft = 4096 and a hop length of 33 ms. To extract audio features, we employed the Librosa [27] package and the Parselmouth [13] library. The network output comprises joint angles relative to a T-pose, parameterized using the exponential map [9]. Each dimension is normalized to have a mean of zero and a standard deviation of one across the official challenge training set. We selected a total of 26 joints for full-body expression. Subsequently, we apply a Savitzky-Golay filter [26] with a window length of 9 and a polynomial order of 3 to smooth the generated gestures. For the text segments, we employ a pre-trained text embedding model [32] producing 1024 dimensions per sentence.
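The per-dimension normalization step above (zero mean, unit standard deviation over the training set) might be implemented as follows. This is a generic sketch, not the authors' code; the epsilon guard against constant dimensions is an added assumption.

```python
import numpy as np

def fit_normalizer(train_feats):
    """Compute per-dimension statistics over the training set (frames stacked on axis 0)."""
    mean = train_feats.mean(axis=0)
    std = train_feats.std(axis=0) + 1e-8   # epsilon guards dimensions with zero variance
    return mean, std

def normalize(x, mean, std):
    return (x - mean) / std

def denormalize(x, mean, std):
    """Invert the normalization, e.g. before writing generated poses back to BVH."""
    return x * std + mean

rng = np.random.default_rng(1)
train = rng.normal(3.0, 2.0, size=(1000, 4))   # dummy training features
mean, std = fit_normalizer(train)
train_n = normalize(train, mean, std)
```

The same `mean`/`std` fitted on the training set would be reused for validation and test clips, so the model always sees inputs on a consistent scale.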
We opted for sentence embeddings due to their capacity to capture semantic information, in contrast to word embeddings. Given that the audio, text, and gesture data are temporally aligned, the timing of audio features, text embeddings, and pose sequences is synchronized.

Table 1: Detailed hyperparameter settings
Hyperparameter                                    Value
# of joints (J)                                   26
# of pre-pose frames (M)                          8
# of frames per segment (N)                       128
Denoising diffusion steps                         1000
Feature dimension (D)                             128
Condition vector dimension                        512
# of residual blocks per up/downsampling layer    2
# of up/downsampling layers                       4
# of attention heads                              4
N-FFT                                             4096
Hop length [ms]                                   33
Text embedding dimension                          1024
Optimizer                                         AdamW
Learning rate                                     1e-4
Batch size                                        8

5 DISCUSSION
In this section, we discuss the evaluation results. The submitted co-speech gestures are measured on three aspects: human likeness, appropriateness for agent speech, and appropriateness for the interlocutor. The natural motion, monadic baseline, and dyadic baseline are labeled NA, BM, and BD, respectively. Our submitted entry is labeled SA. Our gesture generation system was tested on a Windows 10 desktop with a 3.20 GHz i9-12900K CPU, 128 GB RAM, and one RTX 3090 GPU.

5.1 Human-likeness
The results of the evaluation are presented in Table 2 and Figure 2. Our submitted system achieves a median human-likeness score of 30 and a mean human-likeness score of 32.0. A disparity in human likeness is observed between our entry and natural motion. One significant contributing factor is the lack of structural information: by not capturing the interdependencies among joints, our model generates gestures with a predominant emphasis on arm movements, which tend to exhibit greater motion compared to the head or body joints. Because the movement of the agent's center of gravity is consequently ignored, the human-likeness score may decrease. Furthermore, our system omits finger motions from its generation process.
Another conceivable concern is the effectiveness of the smoothing techniques. Despite the application of a smoothing filter, the motions produced by our system sometimes appear to lack smoothness. Potential contributing factors include suboptimal tuning of the smoothing filter and an insufficient number of pre-pose frames.

Table 2: Summary of the collective perception study with 0.05 confidence intervals for human-likeness. Our entry is SA.
Condition   Median         Mean
NA          71 ∈ [70, 71]  68.4 ± 1.0
SG          69 ∈ [67, 70]  65.6 ± 1.4
SF          65 ∈ [64, 67]  63.6 ± 1.3
SJ          51 ∈ [50, 53]  51.8 ± 1.3
SL          51 ∈ [50, 51]  50.6 ± 1.3
SE          50 ∈ [49, 51]  50.9 ± 1.3
SH          46 ∈ [44, 49]  45.1 ± 1.5
BD          46 ∈ [43, 47]  45.3 ± 1.4
SD          45 ∈ [43, 47]  44.7 ± 1.3
BM          43 ∈ [42, 45]  42.9 ± 1.3
SI          40 ∈ [39, 43]  41.4 ± 1.4
SK          37 ∈ [35, 40]  40.2 ± 1.5
SA          30 ∈ [29, 31]  32.0 ± 1.3
SB          24 ∈ [23, 27]  27.4 ± 1.3
SC           9 ∈ [9, 9]    11.6 ± 0.9

Figure 2: Box plot visualizing the rating distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are the mean ratings (also with a 0.05 confidence interval). Box edges are at the 25th and 75th percentiles, while whiskers cover 95% of all ratings for each condition.

5.2 Appropriateness
With respect to the appropriateness for the main agent's speech, Table 3 and Figure 4 indicate that our entry achieves a preference-matching score of 54.8%. The outcomes of the assessment focusing on the appropriateness for interlocutor speech are displayed in Table 4 and Figure 6; there, our system attains a preference-matching score of 53.5%. We present a concise overview of several configurations within our experimental framework that we posit may contribute to the appropriateness of gestures with respect to speech.
One potential rationale we identify pertains to semantic conditioning. Our system employs a pre-trained sentence embedding model without fine-tuning. However, numerous textual segments in the data fail to adhere to proper sentence structure; consequently, the embedding might inaccurately convey the semantics of these text segments. To mitigate this concern, we could switch from the sentence embedding model to a word embedding model, or utilize extended segments.

Figure 3: Significance of pairwise differences between conditions. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction.

Furthermore, timing synchronization is a consideration. Given that our system incorporates speech features such as the mel-spectrogram, MFCCs, and prosody to extract temporal information from the audio, the model learns to effectively synchronize audio with gestures. Additionally, the pre-pose condition aids in capturing the initiation timing. Consequently, the proposed model demonstrates the capability to regulate the timing of speech onset and pauses. Moreover, we address the issue of gesture smoothness. The gestures generated by our system sometimes exhibit irregularity. We hypothesize that this may be attributed to the architecture of the pose autoencoder, the pre-poses, and the extent of the smoothing filter employed. A more intricate exploration of these factors will be conducted in the ablation study section. We propose potential methods for enhancing the performance of our system concerning both main agent and interlocutor speech appropriateness.
First, the model could incorporate the interlocutor's gestures, audio, and text as conditioning factors. Second, incorporating a more extensive history of features from both the main agent and the interlocutor into the conditioning process might yield improved gesture generation. Third, a more meticulous design of the text embedding model and the gesture autoencoder could enhance semantic conditioning and the inherent naturalness of the generated gestures, respectively. These aspects will be the focal points of our future work.

Table 3: Summary statistics of user-study responses for appropriateness for main agent speech, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.
Condition   MAS            Pref. matched   Raw response count (2 / 1 / 0 / −1 / −2 / Sum)
NA           0.81 ± 0.06   73.6%           755 / 452 / 185 / 217 / 157 / 1766
SG           0.39 ± 0.07   61.8%           531 / 486 / 201 / 330 / 259 / 1807
SJ           0.27 ± 0.06   58.4%           338 / 521 / 391 / 401 / 155 / 1806
BM           0.20 ± 0.05   56.6%           269 / 559 / 390 / 451 / 139 / 1808
SF           0.20 ± 0.06   55.8%           397 / 483 / 261 / 421 / 249 / 1811
SK           0.18 ± 0.06   55.6%           370 / 491 / 283 / 406 / 252 / 1802
SI           0.16 ± 0.06   55.5%           283 / 547 / 342 / 428 / 202 / 1802
SE           0.16 ± 0.05   54.9%           221 / 525 / 489 / 453 / 117 / 1805
BD           0.14 ± 0.06   54.8%           310 / 505 / 357 / 422 / 220 / 1814
SD           0.14 ± 0.06   55.0%           252 / 561 / 350 / 459 / 175 / 1797
SB           0.13 ± 0.06   55.0%           320 / 508 / 339 / 386 / 262 / 1815
SA           0.11 ± 0.06   53.6%           238 / 495 / 438 / 444 / 162 / 1777
SH           0.09 ± 0.07   52.9%           384 / 438 / 258 / 393 / 325 / 1798
SL           0.05 ± 0.05   51.7%           200 / 522 / 432 / 491 / 170 / 1815
SC          −0.02 ± 0.04   49.1%            72 / 284 / 1057 / 314 / 76 / 1803

Figure 4: Bar plots visualizing the response distribution in the appropriateness for main agent speech.
The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar proportional to the fraction of each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance.

5.3 Ablation Study
We conduct an ablation study to verify that autoregression is helpful for co-speech gesture synthesis. We calculate the Fréchet Gesture Distance (FGD) between the ground truth and the generated motions on the validation set, as shown in Table 5. As a result, the FGD of both discriminator features and raw gestures is improved when using the pre-pose condition.

Figure 5: Significant differences between conditions in the appropriateness for main agent speech. White means the condition listed on the y-axis achieved a MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

Table 4: Summary statistics of user-study responses for appropriateness for interlocutor speech, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.
Condition   MAS            Pref. matched   Raw response count (2 / 1 / 0 / −1 / −2 / Sum)
NA           0.63 ± 0.08   67.9%           367 / 272 / 98 / 189 / 88 / 1014
SA           0.09 ± 0.06   53.5%            77 / 243 / 444 / 194 / 55 / 1013
BD           0.07 ± 0.06   53.0%            74 / 274 / 374 / 229 / 59 / 1010
SB           0.07 ± 0.08   51.8%           156 / 262 / 206 / 263 / 119 / 1006
SL           0.07 ± 0.06   53.4%            52 / 267 / 439 / 204 / 47 / 1009
SE           0.05 ± 0.07   51.8%            89 / 305 / 263 / 284 / 73 / 1014
SF           0.04 ± 0.06   50.9%            94 / 208 / 419 / 208 / 76 / 1005
SI           0.04 ± 0.08   50.9%           147 / 269 / 193 / 269 / 129 / 1007
SD           0.02 ± 0.07   52.2%            85 / 307 / 278 / 241 / 106 / 1017
BM          −0.01 ± 0.06   49.9%            55 / 212 / 470 / 206 / 63 / 1006
SJ          −0.03 ± 0.05   49.1%            31 / 157 / 617 / 168 / 39 / 1012
SC          −0.03 ± 0.05   49.1%            34 / 183 / 541 / 190 / 45 / 993
SK          −0.06 ± 0.09   47.4%           200 / 227 / 111 / 276 / 205 / 1019
SG          −0.09 ± 0.08   46.7%           140 / 252 / 163 / 293 / 167 / 1015
SH          −0.21 ± 0.07   44.0%            55 / 237 / 308 / 270 / 144 / 1014

Figure 6: Bar plots visualizing the response distribution in the appropriateness for interlocutor speech. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar proportional to the fraction of each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes.
The dotted black line indicates chance-level performance.

Figure 7: Significant differences between conditions in the appropriateness for interlocutor speech. White means the condition listed on the y-axis achieved a MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

6 CONCLUSION
In this paper, we introduce an innovative diffusion-based co-speech gesture generation framework submitted to the GENEA Challenge 2023. Our approach aims to produce co-speech gestures of high fidelity, achieved by proposing a gesture autoencoder for effective domain transfer between the gesture space and the latent feature space. Furthermore, we leverage denoising diffusion probabilistic models to address the challenge of co-speech gesture generation. The comprehensive results indicate that our method achieves preference-matching scores of 54.8% and 53.5% for appropriateness for main agent speech and interlocutor speech, respectively.

Table 5: Effects of autoregression.
Model           FGD (feature)   FGD (raw)
w/o pre-pose    154.984         4977.059
w/ pre-pose      77.909         2279.612

Moreover, we conduct an in-depth ablation study to affirm the utility of autoregressive methods in co-speech gesture synthesis. Our conclusion highlights the strengths of our system in timing synchronization and the generation of contextually fitting gestures for interactive scenarios. Additionally, we propose several forthcoming research challenges, such as refining the structures of the semantic embedding and gesture embedding models.
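The FGD used in the ablation is a Fréchet distance between Gaussians fitted to feature distributions of real and generated gestures. A numpy-only sketch (using a symmetric-eigendecomposition matrix square root instead of a generic `sqrtm`; the paper does not publish its exact implementation):

```python
import numpy as np

def sqrtm_psd(A):
    """Matrix square root of a symmetric positive semi-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)          # clip tiny negative eigenvalues from round-off
    return (V * np.sqrt(w)) @ V.T

def frechet_distance(mu1, C1, mu2, C2):
    """Frechet distance between N(mu1, C1) and N(mu2, C2):
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})."""
    s1 = sqrtm_psd(C1)
    covmean = sqrtm_psd(s1 @ C2 @ s1)  # Tr of this equals Tr((C1 C2)^{1/2})
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(C1) + np.trace(C2) - 2.0 * np.trace(covmean))
```

In practice, `mu` and `C` would come from stacking per-clip feature vectors (e.g. `mu = feats.mean(0)` and `C = np.cov(feats, rowvar=False)`) for the real and generated sets; identical distributions give a distance of zero.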
Our hope is that our approach contributes not only to the advancement of diffusion-based gesture generation research but also finds application across various gesture generation domains.

ACKNOWLEDGMENTS
This work was supported by the "Development of cognitive/response advancement technology for AI avatar commercialization" project funded by the Brand Engagement Network (BEN) [Q2312881].
nLeeup58b_V
The paper proposes a diffusion model architecture using latent diffusion and autoregressive training for co-speech gesture synthesis. While the evaluation results for human-likeness are not great, it produces the best result in appropriateness for interlocutor speech. Given the design choices and evaluation results, the paper will provide useful insights for the workshop and challenge attendees.
7: Good paper, accept
## Summary The paper proposes a diffusion-based method for synthesizing co-speech gestures. It utilizes an autoencoder to project the raw gesture motions into the latent space. The diffusion process is trained within the latent space based on the speech and text input conditions. Although the method did not produce very good results in the human-likeness study, it produces the best results in appropriateness for interlocutor speech. ## Strength The proposed diffusion model utilizes the latent space and is trained in an autoregressive manner. The model is novel and shows good potential to produce high-quality gestures. The autoregressive framework is also shown to produce a better FGD than the model trained without pre-poses. While the subjective results in human-likeness are not great, the model performs the best in appropriateness for interlocutor speech. The method is described in good detail with all hyperparameter information for reproducibility. The overall paper exposition is clear and easy to follow. ## Weakness The following papers that applied latent diffusion models to motion synthesis should also be cited and discussed: Ao, Tenglong, Zeyi Zhang, and Libin Liu. "GestureDiffuCLIP: Gesture diffusion model with CLIP latents." _arXiv preprint arXiv:2303.14613_ (2023). Chen, Xin, et al. "Executing your Commands via Motion Diffusion in Latent Space." _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2023. Some training details could be better discussed. For example, it is not obvious whether the latent space autoencoder model is trained in an earlier stage or jointly with the diffusion model. In addition to the data processing hyperparameters, the parameters used in training will also be very helpful for replication. ## Justification of Rating The paper proposes a diffusion model architecture for co-speech gesture synthesis. It utilizes latent space learning and trains the model in an autoregressive manner.
The ablation study shows the improvements in FGD with pre-poses to justify the design choice. While the evaluation results for human-likeness are not great, it produces the best result in appropriateness for interlocutor speech. Given a few other concurrent works utilizing diffusion models for motion synthesis, the paper will provide an interesting comparison and discussion of the design choices for the workshop and challenge attendees.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Mm44wlJICIj
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation
["Gwantae Kim", "Yuanming Li", "Hanseok Ko"]
This paper describes a diffusion model for co-speech gesture generation presented by the KU-ISPL entry of the GENEA Challenge 2023. We formulate the gesture generation problem as a co-speech gesture generation problem and a semantic gesture generation problem, and we focus on solving the co-speech gesture generation problem with a denoising diffusion probabilistic model with text, audio, and pre-pose conditions. We use a U-Net with cross-attention architecture as the denoising model, and we propose a gesture autoencoder as a mapping function from the gesture domain to the latent domain. The collective evaluation released by the GENEA Challenge 2023 shows that our model successfully generates co-speech gestures. Our system receives a mean human-likeness score of 32.0, a preference-matched score of appropriateness for the main agent speech of 53.6%, and an interlocutor speech appropriateness score of 53.5%. We also conduct an ablation study to measure the effects of the pre-pose. These results show that our system contributes to co-speech gesture generation for natural interaction.
["GENEA Challenge", "co-speech gesture generation", "diffusion", "neural networks", "generative models"]
ABSTRACT
This paper describes a diffusion model for co-speech gesture generation presented by the KU-ISPL entry of the GENEA Challenge 2023. We formulate the gesture generation problem as a co-speech gesture generation problem and a semantic gesture generation problem, and we focus on solving the co-speech gesture generation problem with a denoising diffusion probabilistic model with text, audio, and pre-pose conditions. We use a U-Net with cross-attention architecture as the denoising model, and we propose a gesture autoencoder as a mapping function from the gesture domain to the latent domain. The collective evaluation released by the GENEA Challenge 2023 shows that our model successfully generates co-speech gestures. Our system receives a mean human-likeness score of 32.0, a preference-matched score of appropriateness for the main agent speech of 53.6%, and an interlocutor speech appropriateness score of 53.5%. We also conduct an ablation study to measure the effects of the pre-pose. These results show that our system contributes to co-speech gesture generation for natural interaction.

CCS CONCEPTS
• Computing methodologies → Animation; • Human-centered computing → Human computer interaction (HCI).

KEYWORDS
GENEA Challenge, co-speech gesture generation, diffusion, neural networks, generative models

ACM Reference Format:
Gwantae Kim, Yuanming Li, and Hanseok Ko. 2023. The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation. In Proceedings of the 25th ACM International Conference on Multimodal Interaction (ICMI'23). ACM, New York, NY, USA, 8 pages. https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION
Synthesizing synchronized and human-like gestures plays a crucial role in improving immersion, engagement, and naturalness for embodied virtual agents and humanoid robots.
During the human-computer interaction (HCI) process, humans use both verbal and non-verbal expressions to convey their intent to the interlocutor. Gesture generation, one of the main challenges for non-verbal interaction, aims to synthesize natural-looking and meaningful human gestures. The task can be divided according to whether a verbal expression exists. When verbal expressions, such as audio or text, are given, the gesture generation model focuses on making gestures that emphasize the meaning of the verbal expressions. In the other case, the model should generate gestures that deliver the intent regardless of whether verbal expressions are given. We define the task with verbal information as co-speech gesture generation, and the task that focuses on synthesizing meaningful body motions that deliver intent as semantic gesture generation. In this research, we focus on generating high-fidelity co-speech gestures. There are many challenges in co-speech gesture generation. The first is timing synchronization. Since the speech and gestures are shown to the interlocutor sequentially, he or she will be confused if the gestures depart from the speech.
For example, if the start and end timing of the gestures slightly differs from the speech, users will think that it is an implementation error. A more detrimental situation arises during continuous generation: once the timing is out of sync, the timing between speech and gestures drifts further and further apart, and the discomfort gradually increases. By similar reasoning, semantic synchronization, the second challenge, is also important for delivering the proper intent. For example, when people say "I disagree." while nodding, the interlocutor will be confused about whether the response is positive or negative. The third obstacle is noise robustness. 3D pose estimation or motion capture is utilized to acquire gesture data. However, the quality of raw data obtained by 3D pose estimation is often insufficient because the algorithm is basically image-to-3D reconstruction, which is a one-to-many problem. Motion capture is better, but it is expensive and time-consuming, and securing quality increases the cost exponentially. Therefore, the raw data may contain noise. Since training with noisy data hurts both quantitative and qualitative performance, a workaround such as pre-processing or noise-robust training is needed. To tackle these problems, deep learning-based approaches have recently been applied to generating co-speech gestures. There are three types of training strategies: reconstruction-based methods [15, 18, 34], generative adversarial network (GAN) [8] based methods [25, 33], and diffusion [7, 12] based methods [3, 5, 38]. The reconstruction-based co-speech gesture generation methods directly estimate gestures from text or audio. Although these methods produce reasonable results in terms of joint error, they show disadvantages in terms of diversity.
To generate various results without quantitative performance degradation, GAN-based co-speech gesture generation models are trained by controlling the weight between the reconstruction loss and the adversarial loss. Recently, denoising diffusion probabilistic models (DDPMs) have achieved huge success in the generative modeling and computer vision fields and are expanding to other research fields [14, 24]. In particular, a diffusion model can synthesize various images that reflect input conditions, even when its semantic space is large. Since the semantic space of speech for co-speech gesture generation is large, a diffusion model may help synthesize various and synchronized results. Therefore, the goal of this paper is to find a suitable diffusion model structure for co-speech gesture generation. In this paper, we propose a diffusion-based co-speech gesture generation method. We establish a gesture autoencoder to project from the gesture space to the feature space and vice versa. The model is configured to select suitable features according to the characteristics of the gesture data. We also present how to deliver audio and text information to the diffusion model. We use validated audio features and a pre-trained language model to provide rich features. The data and evaluations are provided by the GENEA Challenge 2023 [20]. Thanks to the good quality of the data, the noise robustness problem is under control and we can focus on the synchronization problems. The evaluations, which cover human likeness and appropriateness for the main agent and the interlocutor, are also well formulated to measure generation performance. The code is available here¹.

2 RELATED WORKS
2.1 Co-speech gesture generation
Kucherenko et al. [18] proposed an autoencoder-style audio-to-gesture model with hidden representation learning.
The method first finds a hidden embedding space of gestures with an autoencoder and then trains the audio encoder to find a joint embedding space between audio and gestures. Yoon et al. [34] trained a sequence-to-sequence LSTM model to map text transcriptions to 2D co-speech gestures. Kim et al. [15] trained a transformer-based autoencoder with self-supervised pre-training. These approaches use a reconstruction loss to optimize the model. Chang et al. [4] presented a locality-constraint attention-based gesture generation model inspired by Tacotron 2. StyleGestures [1] uses normalizing flows to generate gestures from speech. Audio2Gestures [22] synthesizes gestures using a variational autoencoder. Yoon et al. [33] train the model with an adversarial loss and a reconstruction loss to generate gestures from trimodal contexts. HA2G [25] adopts a hierarchical decoder to address the structural information of the joints. GestureMaster [37] used a rhythm embedding module, a style embedding module, motion graph construction, and graph-based optimization to extract features and generate gestures.

2.2 Semantic gesture generation
Kim et al. [16] generate gestures from the semantics themselves or semantics extracted from text. Combined with an intent classifier, the method complements co-speech gesture generation: the co-speech gesture model is selected to generate gestures if the intent is unclear; otherwise, this method is used to synthesize gestures. SEEG [23] generates semantic energized co-speech gestures with a semantic prompt gallery, a semantic prompter, and semantic energized learning. Gesticulator [19] synchronizes text and audio features in the encoding phase and generates gestures by autoregression.

2.3 Diffusion-based motion generation
Alexanderson et al. [2] proposed Conformer [10]-based diffusion models for gesture generation, dance synthesis, and path-driven locomotion. Zhu et al.

¹https://github.com/GT-KIM/GENEA2023-KU-ISPL
[38] migrated the diffusion model to speech-driven co-speech gesture generation with a diffusion gesture stabilizer and implicit classifier-free guidance. FLAME [17] generates and edits human motion with a pre-trained language model and a transformer. MotionDiffuse [36] and MDM [30] also synthesize human motions from text descriptions. [3] learns a gesture-transcript joint embedding space using contrastive learning; the learned embeddings are incorporated into the diffusion model via an adaptive instance normalization layer. [5] synthesizes motions with a diffusion model in latent space: the motion representations are projected into latent space, diffused, and reconstructed back to the original motion space.

3 CO-SPEECH GESTURE GENERATION MODEL

Figure 1 depicts an overview of the proposed model for generating high-fidelity co-speech gestures. In this section, we first introduce the problem formulation of co-speech gesture generation (Section 3.1). We propose the gesture autoencoder, which is designed to project the gesture space to a feature space (Section 3.2). We then present the classifier-free guidance for applying speech conditions to co-speech gestures (Section 3.3). Furthermore, we establish the forward diffusion and reverse conditional generation processes in feature space (Section 3.4).

3.1 Problem Formulation

Co-speech gesture training data typically consist of a 3D pose sequence x, audio a, text (sentences) s, and metadata. The generative model G, parameterized by θ, is optimized to synthesize x conditioned on the audio a, the text s, and the pre-defined initial poses x−1 of M frames. The learning objective can be formulated as argmin_θ ||x − G_θ(a, s, x−1)||.

However, samples in the training data often have a long duration. To reduce computational cost and memory usage, every modality of a sample is cropped into segments x = {x_1, ..., x_i}, a = {a_1, ..., a_i}, and s = {s_1, ..., s_i}, where x_i has N frames and a_i, s_i cover the same time span as x_i.
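The cropping of a long sample into fixed-length segments can be sketched as follows (a minimal NumPy illustration; the toy sequence length and the drop-the-remainder policy are assumptions, while N = 128 frames and 26 joints follow the paper):

```python
import numpy as np

def crop_segments(x, n_frames):
    """Crop a pose sequence of shape (T, 3J) into consecutive
    segments of n_frames each; trailing frames that do not fill
    a whole segment are dropped in this sketch."""
    n_seg = x.shape[0] // n_frames
    return [x[i * n_frames:(i + 1) * n_frames] for i in range(n_seg)]

# Toy example: 300 frames of 26 joints (3 channels each), N = 128.
poses = np.zeros((300, 26 * 3))
segments = crop_segments(poses, n_frames=128)
print(len(segments), segments[0].shape)  # 2 (128, 78)
```

Audio and text would be cropped with the same time boundaries so that each x_i, a_i, s_i triple stays aligned.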
Now the generative model G estimates x_i from the audio a_i, the text s_i, and the M pose frames x_{i−1}^{(N−M):N} from the previous segment, instead of synthesizing x at once. Finally, the generative model G synthesizes the gestures {x_1, ..., x_i} continuously.

The model is autoregressive because the poses generated for the previous segment are used to synthesize the current segment, and stochastic because the initial diffusion feature map is random noise.

Figure 1: Overview of the proposed diffusion-based co-speech gesture generation method. The model is autoregressive and probabilistic. For the N-th generation, audio, text, and pre-poses are projected into the latent space and used as conditions. The initialized Gaussian noise is iteratively denoised by the reverse process. The output latent vector is reconstructed into gesture space by the decoder.

3.2 Gesture Autoencoder

In Stable Diffusion [28], the latent diffusion model is flexible, computationally tractable, and sometimes achieves quality improvements. Our gesture autoencoder focuses on finding a good latent embedding space projected from the gesture space. It consists of two autoencoders: a pose autoencoder and a motion autoencoder. Since a gesture is sequential pose data, we design the pose autoencoder to project the raw pose space to a latent space, and the motion autoencoder to find correlations along the time axis.

The pose encoder and decoder each consist of 3 fully-connected layers with dropout [29] and the GELU activation function [11]. The input pose sequence x ∈ R^{N×3J} is projected to z′ ∈ R^{N×D} by the pose encoder, where z′ denotes the mid-level hidden representation, J is the number of joints, and D is the dimension of z′; the pose decoder performs the reverse projection. The pose autoencoder is first trained with an L1 reconstruction loss.
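A minimal NumPy sketch of such a pose encoder/decoder pair (only the 3-layer fully-connected structure with GELU is taken from the text; the hidden widths and random placeholder weights are assumptions, and dropout is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def fc_stack(dims):
    """Weights and biases for a stack of fully-connected layers."""
    return [(rng.normal(0.0, 0.02, (d_in, d_out)), np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = gelu(x)  # no activation on the output layer
    return x

J, D = 26, 128                            # joints and feature dimension (Table 1)
encoder = fc_stack([3 * J, 256, 256, D])  # 3 fully-connected layers
decoder = fc_stack([D, 256, 256, 3 * J])

x = rng.normal(size=(128, 3 * J))         # one segment of N = 128 pose frames
z_mid = forward(encoder, x)               # (N, 3J) -> (N, D)
recon = forward(decoder, z_mid)           # (N, D)  -> (N, 3J)
l1 = np.abs(recon - x).mean()             # L1 reconstruction loss
```

In practice the layers would be trained with gradient descent on the L1 loss; the sketch only illustrates the shape flow from pose space to the mid-level representation and back.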
Once the pose autoencoder is optimized, its parameters are frozen for the remaining training stages, such as the diffusion training stage.

The motion autoencoder aims to capture the sequential information of the data. Thus, the motion encoder and decoder consist of 3 gated recurrent unit (GRU) layers [6] and 3 multi-head self-attention layers [31], which have strong capacity for sequential data modeling. The motion encoder is formulated as

z = MHSA(GRU(z′)),   (1)

where MHSA(X) = Attention(X, X, X). The attention mechanism is

Attention(Q, K, V) = softmax(QK^T / √d) · V,   (2)

where Q, K, and V are the query, key, and value derived from the feature matrix, d is the channel dimension, and ^T denotes matrix transposition.

The mid-level hidden representation z′ ∈ R^{N×D} is projected to z ∈ R^{N×D} by the motion encoder, where z denotes the hidden representation in feature space, and the motion decoder performs the reverse projection. The motion autoencoder is trained separately with an L1 reconstruction loss. Its parameters are likewise frozen after this training stage.

3.3 Conditioning

Diffusion models are theoretically capable of modeling a conditional distribution p(z|y). This can be implemented with a conditional denoising autoencoder ε_θ(z_t, t, y), where y ∈ {a, s, z_{i−1}}, to steer the generation process through the inputs y. To combine the conditional information with the latent vector in the U-Net backbone, we use a cross-attention mechanism, as in Stable Diffusion [28].

The three modalities (audio, text, and pre-pose) are used as conditions in the diffusion process. The pre-processed audio features, text features, and pre-pose features are projected to embedding vectors by fully-connected layers.
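A minimal sketch of this conditioning path (the 512-dimensional condition vector and the 1024-dimensional text embedding follow Table 1; the raw audio/pre-pose feature sizes, the random placeholder weights, and the sinusoidal timestep embedding are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
D_COND = 512  # condition vector dimension (Table 1)

def project(feat, w):
    return feat @ w  # one fully-connected projection per modality

# Assumed raw feature sizes, for illustration only.
audio_feat, w_a = rng.normal(size=160), rng.normal(size=(160, D_COND))
text_feat, w_s = rng.normal(size=1024), rng.normal(size=(1024, D_COND))
prepose_feat, w_p = rng.normal(size=128), rng.normal(size=(128, D_COND))

def timestep_embedding(t, dim=D_COND):
    """Sinusoidal embedding of the diffusion timestep, DDPM-style."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

# Modality embeddings summed with the time embedding form the condition.
cond = (timestep_embedding(37)
        + project(audio_feat, w_a)
        + project(text_feat, w_s)
        + project(prepose_feat, w_p))
print(cond.shape)  # (512,)
```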
These three embedding vectors are added to the time embedding vector and propagate the information of each modality to the denoising U-Net model.

3.4 Diffusion

DDPMs define latent variable models of the form p_θ(x_0) = ∫ p_θ(x_{0:T}) dx_{1:T}, where x_{1:T} are latent variables in the same sample space as x_0 and with the same dimensionality.

The forward process, also called the diffusion process, approximates the posterior distribution q(x_{1:T} | x_0) by a Markov chain that gradually adds Gaussian noise to the data according to the variance schedule β_1, ..., β_T:

q(x_{1:T} | x_0) = ∏_{t=1}^{T} q(x_t | x_{t−1}),   (3)

where

q(x_t | x_{t−1}) = N(x_t; √(1−β_t) x_{t−1}, β_t I).   (4)

The forward-process variances β_t can be learned by reparameterization or held constant as hyperparameters. Since our model uses the gesture autoencoder to map poses to latent embeddings, the latent embeddings are gradually corrupted by noise, finally approaching pure white noise as T goes to infinity. Therefore, the prior latent distribution p(x_T) is N(x_T; 0, I), containing only Gaussian noise.

The reverse process estimates the joint distribution p_θ(x_{0:T}). It is defined as a Markov chain with learned Gaussian transitions starting at N(x_T; 0, I):

p_θ(x_{0:T}) = p(x_T) ∏_{t=1}^{T} p_θ(x_{t−1} | x_t),   (5)

where

p_θ(x_{t−1} | x_t) = N(x_{t−1}; μ_θ(x_t, t), Σ_θ(x_t, t)).   (6)

The corrupted noisy latent embedding x_t can be sampled directly via q(x_t | x_0) = N(x_t; √ᾱ_t x_0, (1−ᾱ_t) I), where α_t = 1−β_t and ᾱ_t = ∏_{s=1}^{t} α_s.

Since co-speech gesture generation is a conditional generation problem, we provide the additional inputs a, s, and z_{i−1} to the model, injecting these conditions into the generation process. The reverse process at each timestep can then be written for our problem as:

p_θ(z_{t−1} | z_t, y) = N(z_{t−1}; μ_θ(z_t, t, y), β_t I).   (7)

The reverse process starts by sampling Gaussian noise z_T ∼ N(0, I) and follows the Markov chain to iteratively denoise the latent variable z_t via Eq.
7 to obtain the original latent vector z_0.

The variational lower bound on the negative log-likelihood is used to optimize the diffusion model. Following [12], we simplify the training objective to an ensemble of MSE losses:

L(θ) = E_{t, x_0, ε} [ ||ε − ε_θ(√ᾱ_t x_0 + √(1−ᾱ_t) ε, y, t)||² ],   (8)

where t is uniformly sampled between 1 and T, and ε is sampled from N(0, I). The diffusion model is trained by gradient descent steps on Eq. 8 until convergence.

4 EXPERIMENT

4.1 Data Processing

We trained our model on the GENEA Challenge 2023 dataset [35], derived from the Talking With Hands 16.2M dataset [21]. The dataset comprises a training set of 371 clips, a validation set of 40 clips, and a test set of 70 clips. Each clip consists of audio recordings, transcriptions, gesture motions for the main agent, gesture motions for the interlocutor, and associated metadata. The audio has a sampling rate of 44100 Hz. The gesture motions are in BVH (Biovision Hierarchy) format at a frame rate of 30 frames per second (FPS).

Our system exclusively uses the audio and text data of the main agent, disregarding the interlocutor's information and the metadata. We extract the mel-spectrogram, mel-frequency cepstral coefficients, and prosody features using n-fft = 4096 and a hop length of 33 ms, employing the Librosa [27] package and the Parselmouth [13] library. The network output comprises joint angles relative to a T-pose, with these angles parameterized using the exponential map [9]. Each dimension is normalized to zero mean and unit standard deviation across the official challenge training set. We selected a total of 26 joints for full-body expression. Subsequently, we apply a Savitzky-Golay filter [26] with a window length of 9 and a polynomial order of 3 to smooth the generated gestures. For text, we employ a pre-trained sentence embedding model [32] with 1024 dimensions per sentence.
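The smoothing step can be sketched with SciPy's Savitzky-Golay filter, using the stated window length of 9 and polynomial order of 3 (the toy joint-angle curve and its noise level are assumptions):

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy "generated gesture" channel: a noisy 30 FPS joint-angle curve.
t = np.linspace(0.0, 4.0, 120)
rng = np.random.default_rng(2)
angles = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.normal(size=t.shape)

# Savitzky-Golay smoothing with window length 9 and order 3, as in the paper.
smoothed = savgol_filter(angles, window_length=9, polyorder=3)

# The filter reduces frame-to-frame jitter while preserving the overall curve.
jitter_raw = np.abs(np.diff(angles)).mean()
jitter_smooth = np.abs(np.diff(smoothed)).mean()
```

The same per-channel call would be applied to every output joint-angle dimension after generation.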
We opted for sentence embeddings due to their capacity to capture semantic information, in contrast to word embeddings. Given that the audio, text, and gesture data are temporally aligned, the timing of the audio features, text embeddings, and pose sequences is synchronized.

Table 1: Detailed hyperparameter settings

    Hyperparameter                                  Value
    # of joints (J)                                 26
    # of pre-pose frames (M)                        8
    # of frames per segment (N)                     128
    Denoising diffusion steps                       1000
    Feature dimension (D)                           128
    Condition vector dimension                      512
    # of residual blocks per up/downsampling layer  2
    # of up/downsampling layers                     4
    # of attention heads                            4
    N-FFT                                           4096
    Hop length [ms]                                 33
    Text embedding dimension                        1024
    Optimizer                                       AdamW
    Learning rate                                   1e-4
    Batch size                                      8

5 DISCUSSION

In this section, we discuss the evaluation results. The submitted co-speech gestures are measured along three axes: human likeness, appropriateness for agent speech, and appropriateness for the interlocutor. Natural motion, the monadic baseline, and the dyadic baseline are labeled NA, BM, and BD, respectively; our submitted entry is labeled SA. Our gesture generation system was tested on a Windows 10 desktop with a 3.20 GHz i9-12900K CPU, 128 GB RAM, and one RTX 3090 GPU.

5.1 Human-likeness

The results of the evaluation are presented in Table 2 and Figure 2. Our submitted system achieves a median human-likeness score of 30 and a mean human-likeness score of 32.0. A disparity in human likeness is observed between our entry and natural motion. One significant contributing factor is the lack of structural information: by not capturing the interdependencies among joints, our model generates gestures with a predominant emphasis on arm movements, which exhibit greater motion than the head or body joints. Since the movement of the agent's center of gravity is ignored for the same reason, the human-likeness score may decrease. Furthermore, our system omits finger motions from its generation process.
Another conceivable concern is the effectiveness of the smoothing technique. Despite the application of a smoothing filter, the motions produced by our system sometimes appear to lack smoothness. Potential contributing factors include suboptimal tuning of the smoothing filter and an insufficient number of pre-pose frames.

Table 2: Summary of the collective perception study with a 0.05 confidence interval for human-likeness. Our entry is SA.

    Condition  Median         Mean
    NA         71 ∈ [70, 71]  68.4 ± 1.0
    SG         69 ∈ [67, 70]  65.6 ± 1.4
    SF         65 ∈ [64, 67]  63.6 ± 1.3
    SJ         51 ∈ [50, 53]  51.8 ± 1.3
    SL         51 ∈ [50, 51]  50.6 ± 1.3
    SE         50 ∈ [49, 51]  50.9 ± 1.3
    SH         46 ∈ [44, 49]  45.1 ± 1.5
    BD         46 ∈ [43, 47]  45.3 ± 1.4
    SD         45 ∈ [43, 47]  44.7 ± 1.3
    BM         43 ∈ [42, 45]  42.9 ± 1.3
    SI         40 ∈ [39, 43]  41.4 ± 1.4
    SK         37 ∈ [35, 40]  40.2 ± 1.5
    SA         30 ∈ [29, 31]  32.0 ± 1.3
    SB         24 ∈ [23, 27]  27.4 ± 1.3
    SC          9 ∈ [9, 9]    11.6 ± 0.9

Figure 2: Box plot visualizing the rating distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are the mean ratings (also with a 0.05 confidence interval). Box edges are at the 25th and 75th percentiles, while whiskers cover 95% of all ratings for each condition.

5.2 Appropriateness

With respect to the appropriateness for the main agent's speech, Table 3 and Figure 4 indicate that our entry achieves a preference-matching score of 54.8%. The outcomes of the assessment focusing on the appropriateness for interlocutor speech are displayed in Table 4 and Figure 6; here our system attains a preference-matching score of 53.5%. Below we give a concise overview of several design choices in our experimental framework that we posit may contribute to the appropriateness of the gestures with respect to speech.
One potential rationale we identify pertains to semantic conditioning. Our system employs a pre-trained sentence embedding model without fine-tuning. However, numerous textual segments in the data fail to adhere to proper sentence structure; consequently, the embedding might inaccurately convey the semantics of these text segments. To mitigate this concern, we could switch from the sentence embedding model to a word embedding model, or use extended segments.

Figure 3: Significance of pairwise differences between conditions. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α = 0.05 after Holm-Bonferroni correction.

Furthermore, timing synchronization is a consideration. Given that our system incorporates speech features such as the mel-spectrogram, MFCCs, and prosody to extract temporal information from the audio, the model learns to effectively synchronize audio with gestures. Additionally, the pre-pose condition aids in capturing the initiation timing. Consequently, the proposed model demonstrates the capability to regulate the timing of speech onset and pauses.

Moreover, we address the issue of gesture smoothness. The gestures generated by our system sometimes exhibit irregularity. We hypothesize that this may be attributed to the architecture of the pose autoencoder, the pre-poses, and the extent of the smoothing filter employed. A more detailed exploration of these factors is conducted in the ablation study section.

We propose potential methods for enhancing the performance of our system with respect to both main agent and interlocutor speech appropriateness.
First, the model could incorporate the interlocutor's gestures, audio, and text as conditioning factors. Second, incorporating a more extensive history of features from both the main agent and the interlocutor into the conditioning might yield improved gesture generation. Third, a more meticulous design of the text embedding model and the gesture autoencoder could enhance the semantic conditioning and the naturalness of the generated gestures, respectively. These aspects will be the focus of our future work.

Table 3: Summary statistics of user-study responses for appropriateness for main agent speech, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.

    Condition  MAS           Pref. matched     2    1     0   −1   −2   Sum
    NA          0.81 ± 0.06  73.6%           755  452   185  217  157  1766
    SG          0.39 ± 0.07  61.8%           531  486   201  330  259  1807
    SJ          0.27 ± 0.06  58.4%           338  521   391  401  155  1806
    BM          0.20 ± 0.05  56.6%           269  559   390  451  139  1808
    SF          0.20 ± 0.06  55.8%           397  483   261  421  249  1811
    SK          0.18 ± 0.06  55.6%           370  491   283  406  252  1802
    SI          0.16 ± 0.06  55.5%           283  547   342  428  202  1802
    SE          0.16 ± 0.05  54.9%           221  525   489  453  117  1805
    BD          0.14 ± 0.06  54.8%           310  505   357  422  220  1814
    SD          0.14 ± 0.06  55.0%           252  561   350  459  175  1797
    SB          0.13 ± 0.06  55.0%           320  508   339  386  262  1815
    SA          0.11 ± 0.06  53.6%           238  495   438  444  162  1777
    SH          0.09 ± 0.07  52.9%           384  438   258  393  325  1798
    SL          0.05 ± 0.05  51.7%           200  522   432  491  170  1815
    SC         −0.02 ± 0.04  49.1%            72  284  1057  314   76  1803

Figure 4: Bar plots visualizing the response distribution in the appropriateness for main agent speech.
The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar proportional to the fraction of each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance.

Figure 5: Significant differences between conditions in the appropriateness for main agent speech. White means the condition listed on the y-axis achieved a MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

Table 4: Summary statistics of user-study responses for appropriateness for interlocutor speech, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, ignoring ties.

    Condition  MAS           Pref. matched     2    1    0   −1   −2   Sum
    NA          0.63 ± 0.08  67.9%           367  272   98  189   88  1014
    SA          0.09 ± 0.06  53.5%            77  243  444  194   55  1013
    BD          0.07 ± 0.06  53.0%            74  274  374  229   59  1010
    SB          0.07 ± 0.08  51.8%           156  262  206  263  119  1006
    SL          0.07 ± 0.06  53.4%            52  267  439  204   47  1009
    SE          0.05 ± 0.07  51.8%            89  305  263  284   73  1014
    SF          0.04 ± 0.06  50.9%            94  208  419  208   76  1005
    SI          0.04 ± 0.08  50.9%           147  269  193  269  129  1007
    SD          0.02 ± 0.07  52.2%            85  307  278  241  106  1017
    BM         −0.01 ± 0.06  49.9%            55  212  470  206   63  1006
    SJ         −0.03 ± 0.05  49.1%            31  157  617  168   39  1012
    SC         −0.03 ± 0.05  49.1%            34  183  541  190   45   993
    SK         −0.06 ± 0.09  47.4%           200  227  111  276  205  1019
    SG         −0.09 ± 0.08  46.7%           140  252  163  293  167  1015
    SH         −0.21 ± 0.07  44.0%            55  237  308  270  144  1014

5.3 Ablation study

We conduct an ablation study to verify that autoregression helps co-speech gesture synthesis. We calculate the Fréchet Gesture Distance (FGD) between ground-truth and generated motions on the validation set, as shown in Table 5. As a result, the FGD on both discriminator features and raw gestures improves when the pre-pose condition is used.

Figure 6: Bar plots visualizing the response distribution in the appropriateness for interlocutor speech. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar proportional to the fraction of each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes.
The dotted black line indicates chance-level performance.

Figure 7: Significant differences between conditions in the appropriateness for interlocutor speech. White means the condition listed on the y-axis achieved a MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

Table 5: Effects of autoregression.

    Model         FGD (feature)  FGD (raw)
    w/o pre-pose  154.984        4977.059
    w/ pre-pose   77.909         2279.612

6 CONCLUSION

In this paper, we introduce a diffusion-based co-speech gesture generation framework submitted to the GENEA Challenge 2023. Our approach aims to produce high-fidelity co-speech gestures, achieved by proposing a gesture autoencoder for effective domain transfer between the gesture space and the latent feature space. Furthermore, we leverage denoising diffusion probabilistic models to address the challenge of co-speech gesture generation. The comprehensive results indicate that our method achieves preference-matching scores of 54.8% and 53.5% for appropriateness to main agent speech and interlocutor speech, respectively.

Moreover, we conduct an in-depth ablation study to affirm the utility of autoregressive methods in co-speech gesture synthesis. Our conclusion highlights the strengths of our system in timing synchronization and in generating contextually fitting gestures for interactive scenarios. Additionally, we identify several challenges for future research, such as refining the structures of the semantic embedding and gesture embedding models.
Our hope is that our approach contributes not only to the advancement of diffusion-based gesture generation research but also finds application across various gesture generation domains.

ACKNOWLEDGMENTS
This work was supported by the "Development of cognitive/response advancement technology for AI avatar commercialization" project funded by the Brand Engagement Network (BEN) [Q2312881].
NrFzZ62KOHB
This entry does not seem to provide any significant contributions.
3: Clear rejection
The manuscript was a bit difficult to read due to poor English quality. It is highly recommended that the authors run their future submissions through a proofreading service. There are even a few loose, incomplete sentences which are totally disconnected. In general there were parts that were difficult to follow due to the poor discourse. Just an example: "To optimize the diffusion model, the variational lower bound on negative log-likelihood." The approach does not seem to be very original, given that a few other authors have experimented with diffusion-based motion generation and other similar mechanisms described in this approach. The authors present a decent literature review, so they seem knowledgeable of the current state of the art. The authors do not clarify what part of their contribution, if any, is novel or an enhancement compared to existing methods. The system presented considers only the main agent's audio and text transcript, and fully ignores the interlocutor's information. It scored quite low compared to other entries in terms of human-likeness and appropriateness for main agent speech. By mere chance, it seems to have scored very high on the appropriateness for the interlocutor's speech. The authors claim this as an achievement, which to me sounds very dishonest. They affirm that their system fully ignores the data from the interlocutor's speech; therefore, the fact that they scored high on this measure can only mean that the scoring methodology is flawed, and that this happened by mere chance. This could have been mentioned; however, they seem to claim that it was an achievement.
Given that I did not find a significant novel contribution or any advancement against the current state of the art, namely because the approach scored very low in general (and scored well only by chance in the interlocutor case), plus the fact that it is difficult to read and follow, and that it seems to me that the authors were dishonest and overclaimed that their system performs well against the interlocutor's speech when, despite the score, this is clearly impossible given that it ignores such information, I recommend rejecting this entry.
3: The reviewer is fairly confident that the evaluation is correct
pVBKLqpAUtP
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation
["Vladislav Korzun", "Anna Beloborodova", "Arkady Ilin"]
This paper describes FineMotion's entry to the GENEA Challenge 2023. We explore the potential of DeepPhase embeddings by adapting neural motion controllers to conversational gesture generation. This is achieved by introducing a recurrent encoder for control features. We additionally use VQ-VAE codebook encoding of gestures to support dyadic setup. The resulting system generates stable realistic motion controllable by audio, text and interlocutor's motion.
["embodied agents", "neural networks", "gesture generation", "social robotics", "deep learning", "phase manifold"]
ABSTRACT
This paper describes FineMotion's entry to the GENEA Challenge 2023. We explore the potential of DeepPhase embeddings by adapting neural motion controllers to conversational gesture generation. This is achieved by introducing a recurrent encoder for control features. We additionally use VQ-VAE codebook encoding of gestures to support the dyadic setup. The resulting system generates stable, realistic motion controllable by audio, text, and the interlocutor's motion.

CCS CONCEPTS
• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability.

KEYWORDS
embodied agents, neural networks, gesture generation, social robotics, deep learning, phase manifold

ACM Reference Format:
Vladislav Korzun, Anna Beloborodova, and Arkady Ilin. 2023. The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3577190.3616119

1 INTRODUCTION
The automatic generation of conversational gestures for 3D human models is one of the most opportune problems in character animation. It can simplify video game production and increase the realism of characters' movements. Furthermore, as visual assistants and VTubers become more popular, the demand for realistic gestures for embodied virtual agents is also growing.

The task of automatic gesture generation from speech has received several promising solutions. During the GENEA Challenge 2022 [25], one of the approaches was even rated better than real motion capture data in terms of motion quality [27].
However, the task at hand is becoming more complicated year by year.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ICMI '23, October 9–13, 2023, Paris, France
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0055-2/23/10. . . $15.00
https://doi.org/10.1145/3577190.3616119

The current GENEA Challenge 2023 [15] considers a dialogue setup. Thus, the participants' systems should not only consider the input speech but also the conversation partner's behaviour. As in the previous year, the «Talking With Hands 16.2M» dataset [16] was used, but now each sample contains two sets of motion, audio, and text: one for the main agent and one for the interlocutor.

In the related tasks of condition-based motion generation [23] and character controllers [26], researchers have proposed slightly different approaches that could also benefit conversational gesture generation. One of the most promising approaches for animation representation was presented in [19]: taking into account that motion curves can be considered periodic functions, they can be decomposed via the Fourier transform to obtain high-level features.

Thus, we decided to examine the phase manifold formed by DeepPhase's Periodic AutoEncoder in conversational gesture generation. In order to properly address the dyadic setup of the challenge, we implemented an additional interlocutor gesture representation based on VQ-VAE codebook encoding.
The evaluation [15] showed that our system generates realistic motion which is statistically suitable for the interlocutor's behaviour. However, our system showed poor results on appropriateness for speech, which suggests the need for further development. Our code, along with video examples of generated motion, is publicly available1 to help other researchers reproduce our results.

1 https://github.com/FineMotion/GENEA_2023

Our paper is organized as follows: Section 2 gives an overview of related work; Section 3 describes our approach in general; Section 4 details the generator model's input and output format; Section 5 gives results from the evaluation and discusses them; and Section 6 concludes.

2 RELATED WORK
In this section, we give a general overview of recent conversational gesture generation approaches. We then describe some existing approaches for solving closely related tasks that inspired our solution.

2.1 Conversational gestures generation
The task of conversational gesture generation has been advancing for several years. Starting from window-based frame-by-frame generation [13], end-to-end approaches have led to autoregression [14]. Later, the GENEA Challenge 2022 featured many successful systems. Some of them are based on recurrent models [4, 6, 24], and some even utilise GPT-like large architectures [18], but the most successful, hybrid approach was presented in [27], where the authors use a graph-based model to transition between short clips.

Slightly weaker results were shown by purely autoregressive approaches [11, 12], which faced the main shortcoming of such architectures: converging to a mean pose. In [12], as well as in [14], the authors tried to overcome this problem by adding different teacher-forcing techniques to force the models to first extract an appropriate audio representation.
However, autoregressive approaches have shown significant success without such techniques in a different task: character controllers.

2.2 Character controllers
The task of creating automatic character controllers is related to locomotion [8]. The controlled character should move its joints with respect to the environment and user input. Many data-driven character controller approaches use a mixture-of-experts framework [10], for example, Mode Adaptive Neural Networks (MANN) [26].

Later, the MANN model was improved with local phases [20]. Local phases are computed as a derivative of a block function containing binary states of whether a bone contacts the object/environment. The efficiency of the proposed approach was demonstrated by creating a neural motion controller for a basketball game, where the block function represented a player's contact with the ball or the floor.

Finally, in [19] an unsupervised approach for automatic phase extraction was suggested. The proposed Periodic AutoEncoder extracts periodic features from motion curves after training on unstructured motion datasets. The architecture utilizes a temporal convolutional autoencoder [9], additionally applying a real Fast Fourier Transform to each channel of the latent space. The obtained periodic features were then used to train a motion controller as before, demonstrating the capability of the extracted features.

2.3 Text-to-Gesture Animation Generation
The task of generating human gesture animations from textual prompts involves generating expressive and natural-looking gestures that correspond to a given textual input. For example, in the work of [7] the authors suggest jointly encoding gestures, text, and images into a single latent space using Contrastive Language-Image Pretraining (CLIP) [2]. Also, in GestureDiffuCLIP [21] the authors combined the power of CLIP and diffusion models to generate realistic and diverse gesture animations from text.
To enable the encoding and decoding of gestures, the Vector Quantized Variational Autoencoder (VQ-VAE) [1] was used. Additionally, VQ-VAE has proven to be a valuable tool beyond text-to-gesture generation. In the context of conversational gestures, recent research [18] and [22] applied the VQ-VAE to encode and decode gestures, achieving improved gesture generation performance.

3 SYSTEM OVERVIEW

Our approach follows the original DeepPhase paper [19]. It contains two main stages: training a Periodic AutoEncoder to extract phase features, and building a neural motion controller upon the extracted phases. The motion controller is based on the mixture-of-experts framework also mentioned in the DeepPhase paper, with some ideas from the same authors' earlier work [20]. The main difference between our system and those mentioned above is that we use an auxiliary recurrent Control Variables Encoder to guide motion by audio, text and the interlocutor's motion instead of the user's input. Apart from that, we trained an additional encoder for the interlocutor's motion and supplemented the control features with the obtained latent representation.

3.1 DeepPhase embeddings

To prepare the phase manifold we exactly follow the pipeline proposed in [19]. To train the Periodic AutoEncoder (PAE) we first extract positions from the main agent's motion data. We use all motion files, but extract positions for 26 joints, including the world root and excluding fingers. Then we calculate joint position velocities and smooth them via a Butterworth filter [3].

The training configuration of the PAE is as follows: a training sample contains 61 frames, covering a 2-second window, with 26*3 channels. The number of latent channels (phases) is equal to 8, following the dancing pipeline from the official repository (https://github.com/sebastianstarke/AI4Animation). The number of intermediate channels is equal to the number of joints.
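As a concrete illustration of this preprocessing step, joint velocities can be obtained by finite differences and smoothed with a zero-phase Butterworth filter. This is our own minimal sketch; the 6 Hz cutoff and the filter order are illustrative assumptions, not values stated in the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smoothed_velocities(positions, fps=30, cutoff_hz=6.0, order=2):
    """Finite-difference joint velocities, low-pass filtered with a
    zero-phase Butterworth filter.

    positions: array of shape (T, D) with D flattened joint coordinates.
    cutoff_hz and order are our own guesses, not the paper's settings.
    """
    # Per-frame differences scaled to units per second.
    vel = np.gradient(positions, axis=0) * fps
    # Normalized cutoff: cutoff_hz relative to the Nyquist frequency fps/2.
    b, a = butter(order, cutoff_hz / (fps / 2.0))
    # filtfilt applies the filter forward and backward (zero phase lag).
    return filtfilt(b, a, vel, axis=0)
```

Zero-phase filtering matters here because a causal filter would delay the velocity signal relative to the poses it is paired with during PAE training.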
The model is trained for 150 epochs with a batch size of 512 and the AdamW optimizer, using a Cyclic Learning Rate Scheduler with Restarts [17] with weight decay and learning rate both equal to 10e-4, a restart period of 10, a multiplier of 2 and a cosine policy.

The obtained model extracts phase features as in the original paper. From each time window t it extracts amplitude (A), frequency (F), offset (B) and phase shift (S), with A, F, B, S ∈ R^M, where M is the number of latent channels (or phases). The phase manifold P ∈ R^{2M} for frame t is computed by

P^{(t)}_{2i−1} = A^{(t)}_i · sin(2π · S^{(t)}_i),   P^{(t)}_{2i} = A^{(t)}_i · cos(2π · S^{(t)}_i).   (1)

To obtain phase features P ∈ R^{T×2M} from a motion of length T, we simply extract the phase manifold from a sliding window, i.e. P = {P^{(t)} | t ∈ [1, T]}. To illustrate the periodicity of the extracted phase features, Figure 1 shows them separated by latent channel on a 10-second sample.

Figure 1: Example of extracted phase features

Because the PAE is trained on joint velocities, the obtained phases cannot be used as an intermediate representation of motion in place of the original data when training the motion generator. The problem lies in the difficulty of converting joint positions into joint rotations without introducing kinematic constraints. To overcome this, we also tried to train the PAE on joint rotations. Unfortunately, the resulting phase manifold no longer looks like a periodic function. A PAE trained on angular velocities could theoretically show better results, but we decided to settle on the phase manifold trained on joint velocities.

3.2 Generation model

Our motion generation model extends the mixture-of-experts framework from [19]. It contains two feedforward neural networks: a Gating Network and a Motion Prediction Network.
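The phase-manifold computation of Eq. (1) is simple enough to sketch directly; the following is our own NumPy illustration (the paper's 1-based indices 2i−1 and 2i become the even and odd 0-based array positions):

```python
import numpy as np

def phase_manifold(A, S):
    """Map per-channel amplitudes A and phase shifts S (each of shape (M,))
    to the 2M-dimensional phase manifold P of Eq. (1)."""
    A = np.asarray(A, dtype=float)
    S = np.asarray(S, dtype=float)
    P = np.empty(2 * A.shape[0])
    P[0::2] = A * np.sin(2 * np.pi * S)  # P_{2i-1} in the paper's 1-based indexing
    P[1::2] = A * np.cos(2 * np.pi * S)  # P_{2i}
    return P
```

Each sine/cosine pair places channel i on a circle of radius A_i, so the amplitude is recoverable as the per-pair norm, which is what makes the manifold a smooth, periodicity-preserving embedding.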
The model's notation follows [20].

The Gating Network is built as a stack of linear layers with ELU [5] activations between them. It takes phase features and predicts weights for the experts; in our case, there are 8 experts. The Motion Prediction Network then uses these weights to form linear combinations over the experts. The Motion Prediction Network itself consists of several "Expert Layers" with ELU activations between them. Each layer E uses expert weights α = {α_i, i ∈ [1, N]} and input x as follows:

E(x, α) = Σ_{i=1}^{N} α_i (W_i x + b_i)   (2)

where W_i ∈ R^{h×m} and b_i ∈ R^h are weights and biases respectively, with m and h being the input and output dimensions. As in the original DeepPhase repository, the number of "Expert Layers", as well as the number of linear layers in the Gating Network, is equal to 3.

3.3 Control Variables Encoder

Initially, the input and output data formats were similar to [20]. However, significant changes were introduced. As control-variables input, we use a similar time window of audio features. But the more control features we added (such as text and the interlocutor's pose), the larger the control-variables vector became. We therefore added an additional recurrent encoder of control features, based on a bi-directional GRU over a FeedForward Highway as in [12], to shorten this vector. It takes time-window features around the current frame and returns the RNN output vector corresponding to the considered frame.

3.4 Interlocutor Gesture Encoder

Model. To effectively respond to the gestures of the interlocutor, our model leverages the Interlocutor Gesture Encoder, a crucial component based on the VQ-VAE framework from [1]. This model has shown good results in gesture coding, as demonstrated in [18] and [22]. The Interlocutor Gesture Encoder enables us to encode high-quality representations of gestures into compact vectors. For better learning, we added improvements such as exponential codebook smoothing and discarding of unused vectors, as suggested in the original article.

Data processing.
To train the VQ-VAE model, we segment gestures according to the beats in the audio. This idea was proposed in [22], whose authors divide gestures into segments that align with the rhythmic structure of the audio, as this is believed to capture the salient aspects of the gestures. With this approach, the maximum number of frames in one gesture sample is 18. This approach has shown promising results in capturing temporal dynamics and synchronizing gestures with the corresponding audio cues. Building upon this concept, we adopt a similar data processing strategy in our study to leverage the benefits of aligning gestures with the rhythmic elements of the audio. During training, the network is fed only those gesture samples, from both partners, in which at least one conversational partner was speaking. Each selected sample corresponds to the speaker's audio beats. During inference, we feed only the interlocutor's gestures corresponding to the active speaker's audio beats. To determine the moments of speech, we use the text transcript. If there is no active speaker at a given moment, the main agent's audio beats are chosen for guidance.

Training. We train the VQ-VAE model with a codebook size of 2048. The dimension of the codebook vectors was 256. Codebook occupancy reaches 70%. The model was trained for 152 epochs.

Inference. To feed the interlocutor's gestures into the main model, we split the interlocutor's audio into beats, then extract a vector for each sample. After that, we duplicate each vector to the length of its beat. Thus, we obtain a number of vectors equal to the number of frames in the original gesture.

4 GENERATOR INPUTS AND OUTPUTS

Figure 2: Generator model

The overall system is illustrated in Figure 2.
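The beat-aligned duplication of codebook vectors described in Section 3.4 (one vector per beat segment, repeated to cover that segment's frames) amounts to a single repeat operation. A minimal sketch with our own variable names:

```python
import numpy as np

def upsample_codes_to_frames(codes, segment_lengths):
    """Duplicate each per-beat codebook vector over the frames of its segment,
    yielding exactly one vector per motion frame.

    codes: (num_segments, d) array of quantized codebook vectors.
    segment_lengths: frames per beat segment (sums to the total frame count).
    """
    return np.repeat(codes, segment_lengths, axis=0)
```

This keeps the interlocutor representation frame-aligned with the 30 FPS motion stream without interpolating between discrete codebook entries.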
The model takes information from the current frame and predicts the next frame. We use a time-series window notation similar to [20], i.e. T_{t0}^{t1} represents features collected within a time window t0 ≤ t ≤ t1. The final data formats are described below.

Inputs. The generator's input consists of 3 components: X^S_i, X^A_i and X^P_i.

The character state X^S_i on the i-th frame consists of concatenated joint rotations and velocities. We also initially used joint positions, but we observed that the model is more stable without them. We represent joint rotations via the 6D continuous representation from [28] to eliminate cases where Euler angles take values of 0 or 180 degrees. Joint velocities were smoothed beforehand, as in the PAE training routine. It is also worth mentioning that the character state and phases were normalized beforehand.

The control variables X^A_i are features from the time window T^{1s}_{−1s} around the current frame, which is passed to the Control Variables Encoder to obtain one control vector X^C_i; this vector is concatenated with the character state as the main input to the Motion Prediction Network. As initial control features, we extract 26 MFCCs from the audio, GloVe embeddings of size 50 and the codebook encodings obtained from the VQ-VAE, all aligned to the motion frame rate of 30 FPS. To align the text and interlocutor features, we distribute them evenly over the frames corresponding to their time span. We also tried other combinations, including the interlocutor's speech, but they gave less stable results. We decided to make the dimension of X^C_i equal to that of X^S_i.

The motion phases X^P_i = Θ_i ∈ R^{2KT} are phase features extracted via the PAE (K latent channels), uniformly sampled from the time window T^{1s}_{−1s} and concatenated into one vector, i.e. Θ_i = {P(i−30), ..., P(i−5), P(i), P(i+5), ..., P(i+30)}, with 13 frames sampled in the window.

Outputs.
The Motion Prediction Network output contains only 2 components: the next-frame character state Y^S_{i+1}, which is analogous to the input one, and the future motion phases Y^P_{i+1} = {Θ_{i+1}, ΔΘ_{i+1}}, containing not only the phases but also the phase velocities for the time window T^{1s}_{0s} with respect to frame i+1, i.e. Θ_{i+1} = {P(i+1), P(i+6), ..., P(i+31)}, with 7 frames total.

Training. The model is trained to predict the next frame from the current frame; it does not use outputs from the previous step: every frame is taken from the dataset directly and processed independently. All parts of the generator are trained simultaneously, end-to-end, for 50 epochs with a batch size of 2048 and a default Adam optimizer with a learning rate of 10e-4. The hidden sizes of the Gating Network and the Motion Prediction Network are 64 and 1024 respectively.

Inference. During inference, our model predicts the next frame based on the previous one, in an auto-regressive fashion. We also blend phases between iterations before passing them to the next step: Θ'_{i+1} = λ Θ_{i+1} + (1 − λ)(Θ_i + ΔΘ_{i+1}), with λ = 0.5.

5 RESULTS AND DISCUSSION

As in previous challenges, the organizers provided a comprehensive human evaluation of the participating systems [15]. This time, 3 main subjective measures are considered: human-likeness, appropriateness for agent speech and appropriateness for the interlocutor's behaviour.

Human-likeness estimates the overall quality of the generated motion without taking into account the agent's speech or the interlocutor's behaviour. Our approach, labelled SL, shows competitive results (median score 51 ∈ [50, 51] in Table 1), indicating the ability of DeepPhase embeddings to maintain periodicity and, as a result, the realism of the predicted motion.
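As an aside on the inference procedure of Section 4, the phase-blending update can be written in one line; this is our own sketch with our own variable names:

```python
import numpy as np

def blend_phases(theta_pred, theta_prev, dtheta_pred, lam=0.5):
    """Blend the network's predicted phases with the previous phases advanced
    by the predicted phase velocity: lam * theta_pred
    + (1 - lam) * (theta_prev + dtheta_pred)."""
    return lam * theta_pred + (1.0 - lam) * (theta_prev + dtheta_pred)
```

With lam = 1 the update trusts the prediction alone; smaller values keep the phase trajectory closer to an extrapolation of the previous step, which smooths auto-regressive drift.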
Although our model is rated rather well, it does not reach the quality of natural motion or of the state-of-the-art approaches.

To estimate the appropriateness for agent speech, evaluation participants were given two motion clips generated by one model from separate audio samples and were asked to distinguish which of the two clips corresponds to the target audio sample. Good models generate motions that participants can easily match to the correct audio. The main quantity of interest in the appropriateness evaluation is the mean appropriateness score (MAS). Unfortunately, our model provides poor appropriateness results (0.05 ± 0.05 MAS in Table 1). The organizers mention (Section 3.6 in [15]) that our solution does not differ statistically from chance performance. This leads us to suspect a weakness in the audio and text features used.

Table 1: Summary statistics of the evaluation studies

Condition   Human-Likeness (median)   Agent Speech (MAS)   Interlocutor (MAS)
NA          71 ∈ [70, 71]             0.81 ± 0.06          0.63 ± 0.08
BM          43 ∈ [42, 45]             0.20 ± 0.05          −0.01 ± 0.06
BD          46 ∈ [43, 47]             0.14 ± 0.06          0.07 ± 0.06
SA          30 ∈ [29, 31]             0.11 ± 0.06          0.09 ± 0.06
SB          24 ∈ [23, 27]             0.13 ± 0.06          0.07 ± 0.08
SC           9 ∈ [9, 9]              −0.02 ± 0.04          −0.03 ± 0.05
SD          45 ∈ [43, 47]             0.14 ± 0.06          0.02 ± 0.07
SE          50 ∈ [49, 51]             0.16 ± 0.05          0.05 ± 0.07
SF          65 ∈ [64, 67]             0.20 ± 0.06          0.04 ± 0.06
SG          69 ∈ [67, 70]             0.39 ± 0.07          −0.09 ± 0.08
SH          46 ∈ [44, 49]             0.09 ± 0.07          −0.21 ± 0.07
SI          40 ∈ [39, 43]             0.16 ± 0.06          0.04 ± 0.08
SJ          51 ∈ [50, 53]             0.27 ± 0.06          −0.03 ± 0.05
SK          37 ∈ [35, 40]             0.18 ± 0.06          −0.06 ± 0.09
SL          51 ∈ [50, 51]             0.05 ± 0.05          0.07 ± 0.06

New to this year's challenge is the appropriateness metric for the main agent's reaction to the interlocutor's behaviour. The study itself is similar to the previous one, but with the interlocutor's motion varied instead, and it is conducted while the main agent is silent.
Surprisingly, using the interlocutor's motion features yields better results (0.07 ± 0.06 MAS in Table 1), significantly better than chance (Section 4.7 in [15]).

Overall, our system shows promising results, more so on human-likeness and appropriateness for the interlocutor. However, there are ways to improve the approach, such as adding more compelling audio features or adding teacher forcing to direct the model's attention to speech features. Nevertheless, using DeepPhase embeddings allows us to train the model without it converging to a rest pose. Additionally, the VQ-VAE codebook encoding allows the resulting solution to accommodate the dyadic conversational setup and generate plausible reactions to the interlocutor's behaviour.

6 CONCLUSION

Sharing approaches between different tasks in the domain of motion generation could significantly improve the overall state of the research community. Our system is based on an approach that has proved itself as a neural motion controller and showed promising results during the evaluation. We believe that exploiting the periodic properties of motion could yield improvements in all animation-related problems. DeepPhase embeddings are one of the latest and most successful approaches for extracting these properties, so we recommend considering them, as well as VQ-VAE codebook encoding, when developing future models.

Although our system showed relatively good results in the challenge, there is room for improvement. For example, a better speech encoder or additional data filtering could be used. The mixture-of-experts framework could also be extended to work with sequences, and teacher-forcing techniques could be applied.
wJ2ZfxfvUz
A valuable adaptation of an important motion synthesis method to gestures
7: Good paper, accept
This paper describes a gesture generation model that closely follows the DeepPhase approach, consisting of two parts:

1. Training a Periodic Autoencoder that transforms motion in joint velocity space to a low-dimensional phase manifold.
2. Training a motion generation model using the learned phase features and the task-specific conditioning - in this case, both agents' speech (GloVe embeddings and MFCCs) and the interlocutor's movements (encoded with a separate VQ-VAE). These control variables are further encoded using a bi-directional GRU.

The generator's input is a 2-second window containing: 1) character state (joint rotations and velocities), 2) the encoded control variables, and 3) motion phases for a subset of frames. The network is trained to predict the motion features in the next frame based on the current frame only, while synthesis is done in an autoregressive fashion.

The idea of using DeepPhase for gesture synthesis is definitely interesting. Although much of the work is based on the original DeepPhase paper, there are some nontrivial modifications, such as the ``Control Variables Encoder'' and the phase blending during inference. The results in the human-likeness study indicate that it might be worthwhile to further build on this line of work. I also think negative results, such as the failure to train the PAE on joint angles, or the low specificity to speech in the outputs, are particularly insightful.

*Overall, I think this paper is a valuable contribution to the gesture synthesis community.* I would encourage the authors to further share their results online, e.g., by open-sourcing their codebase (if possible) and uploading rendered videos.

---

I have the following suggestions for the authors:

1) The abstract states that the motion is suitable for the interlocutor's behaviour. I personally find this statement too strong, since the observed effect sizes in that study seem minuscule compared to natural motion.
2) It would be valuable to describe the training run in terms of the number of optimisation steps instead of the number of epochs. The latter is not as useful without knowing the batch size.

3) It would be valuable to duplicate the "Condition" column of Table 1 (shortening the label to C.) so that the results of the two studies can be ordered separately. Alternatively, if space constraints allow, the two tables could be fully separated.

4) Below I am listing some of the typos I found during my review.

* line 42: missing full stop
* line 165: with [the] obtained latent representation
* line 131: the architecture utilizes [a] temporal
* line 252: >the more< the control variables vector would become -> >the larger<
* Table 1 caption: Summary statistics of >appropriateness study< -> ... of >the appropriateness studies<
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
pVBKLqpAUtP
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation
["Vladislav Korzun", "Anna Beloborodova", "Arkady Ilin"]
This paper describes FineMotion's entry to the GENEA Challenge 2023. We explore the potential of DeepPhase embeddings by adapting neural motion controllers to conversational gesture generation. This is achieved by introducing a recurrent encoder for control features. We additionally use VQ-VAE codebook encoding of gestures to support dyadic setup. The resulting system generates stable realistic motion controllable by audio, text and interlocutor's motion.
["embodied agents", "neural networks", "gesture generation", "social robotics", "deep learning", "phase manifold"]
ABSTRACT

This paper describes FineMotion's entry to the GENEA Challenge 2023. We explore the potential of DeepPhase embeddings by adapting neural motion controllers to conversational gesture generation. This is achieved by introducing a recurrent encoder for control features. We additionally use VQ-VAE codebook encoding of gestures to support the dyadic setup. The resulting system generates stable, realistic motion controllable by audio, text and the interlocutor's motion.

CCS CONCEPTS

• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability.

KEYWORDS

embodied agents, neural networks, gesture generation, social robotics, deep learning, phase manifold

ACM Reference Format:
Vladislav Korzun, Anna Beloborodova, and Arkady Ilin. 2023. The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3577190.3616119

1 INTRODUCTION

The automatic generation of conversational gestures for 3D human models is one of the most opportune problems in character animation. It can be used to simplify video game production and to increase the realism of characters' movements. Furthermore, as visual assistants and VTubers become more popular, the demand for realistic gestures for embodied virtual agents is also growing.

The task of automatic gesture generation from speech has several promising solutions. During the GENEA Challenge 2022 [25], one of the approaches was even rated better than real motion capture data in terms of motion quality [27].
However, the task at hand is becoming more complicated year by year.

ICMI '23, October 9–13, 2023, Paris, France. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0055-2/23/10. https://doi.org/10.1145/3577190.3616119

The current GENEA Challenge 2023 [15] considers a dialogue setup. Thus, the participants' systems should consider not only the input speech but also the conversation partner's behaviour. As in the previous year, the «Talking With Hands 16.2M» dataset [16] was used, but now each sample contains two sets of motion, audio and text: one for the main agent and one for the interlocutor.

In the related tasks of condition-based motion generation [23] and character controllers [26], researchers propose slightly different approaches that could also benefit conversational gesture generation. One of the most promising approaches for animation representation was presented in [19]. Taking into account that motion curves can be considered periodic functions, they can be decomposed via the Fourier Transform to obtain high-level features.

Thus, we decided to examine the phase manifold formed by DeepPhase's Periodic AutoEncoder for conversational gesture generation. In order to properly address the dyadic setup of the challenge, we implemented an additional interlocutor gesture representation based on VQ-VAE codebook encoding.
Although our model is rated ratherwell, it does not reach the quality of natural motions or state-of-the-art approaches.In order to estimate the appropriateness of agent speech, evalu-ation participants were given two motion clips generated by onemodel using separate audio samples and tasked to distinguish whichof the two motion clips corresponds to the target listening sample.Good models generate motions that participants could easily deter-mine from one another by audio. The main quantity of interest inthe appropriateness evaluation is the mean appropriateness score(MAS). Unfortunately, our model provides poor appropriatenessresults ( 0.05±0.05MAS in Table 1). Organizers mentioned (section3.6 in [ 15]) that our solution does not statistically differ from chanceperformance. This leads us to suspect the weakness of used audioand text features.Table 1: Summary statistics of studiesCondi- Human-Likeness Agent Speech Interlocutortion Median Score MAS MASNA 71∈[70,71] 0.81±0.06 0.63±0.08BM 43∈[42,45] 0.20±0.05−0.01±0.06BD 46∈[43,47] 0.14±0.06 0.07±0.06SA 30∈[29,31] 0.11±0.06 0.09±0.06SB 24∈[23,27] 0.13±0.06 0.07±0.08SC 9∈[9,9]−0.02±0.04−0.03±0.05SD 45∈[43,47] 0.14±0.06 0.02±0.07SE 50∈[49,51] 0.16±0.05 0.05±0.07SF 65∈[64,67] 0.20±0.06 0.04±0.06SG 69∈[67,70] 0.39±0.07−0.09±0.08SH 46∈[44,49] 0.09±0.07−0.21±0.07SI 40∈[39,43] 0.16±0.06 0.04±0.08SJ 51∈[50,53] 0.27±0.06−0.03±0.05SK 37∈[35,40] 0.18±0.06−0.06±0.09SL 51∈[50,51] 0.05±0.05 0.07±0.06The addition to this year’s challenge is the introduction of theappropriateness metric for the main agent’s reaction to the inter-locutor’s behaviour. The study itself is similar to the previous onewith changing interlocutor’s motion. It is also conducted whilethe main agent is silent. 
Surprisingly, using the interlocutor’s mo-tion features yields better results ( 0.07±0.06MAS in Table 1) andsignificantly better than a chance (section 4.7 in [15]).Overall, our system shows promising results, more on human-likeness and appropriateness for the interlocutor. However, thereare ways to improve this approach by adding more compelling au-dio features or adding teacher forcing to make attention to speechfeatures. Nevertheless, using DeepPhase embeddings allow us totrain the model without suffering converging to a rest pose. Addi-tionally, VQ-VAE codebook encoding allowed the resulting solutionto accord the dyadic setup of conversation and generate plausiblereactions to interlocutor behaviour.6 CONCLUSIONSharing approaches between different tasks in the domain of mo-tion generation could significantly improve the overall state ofthe research community. Our system is based on an approach thatproved itself as a neural motion controller and showed promisingresults during evaluation. We assume that using periodic propertiesof motion could yield improvements in all problems connected withanimation. And DeepPhase embeddings are one of the latest andmost successful approaches to extract these properties, so we rec-ommend considering them as well as VQ-VAE codebook encodingduring the development of future models.Despite that our system showed relatively good results in thechallenge, there is room for improvement. For example, a betterspeech encoder or additional data filtering could be used. Themixture-of-experts framework could also be extended to work withsequences. Some teacher-forcing techniques could also be applied.The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation ICMI ’23, October 9–13, 2023, Paris, France
ZskwQe5LHpC
Review of DeepPhase for conversational gestures generation
7: Good paper, accept
The proposed method adapts DeepPhase embedding and VQ-VAE codebook encoding for conversational gesture generation. It is interesting to represent motion data with DeepPhase. Unfortunately, with an average human-likeness of approximately 50%, and the agent's speech appropriateness performing worse than chance, the proposed approach lacked the capability to generate natural gesture motions effectively. The absence of synthesized gesture videos makes it difficult to assess the performance accurately. The results section lacks a comprehensive and detailed discussion of the findings. It remains unclear why the model yields poor appropriateness results; adding an ablation study could really help. Reproducibility: Lines 80-81: the authors release the code.
3: The reviewer is fairly confident that the evaluation is correct
pVBKLqpAUtP
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation
["Vladislav Korzun", "Anna Beloborodova", "Arkady Ilin"]
This paper describes FineMotion's entry to the GENEA Challenge 2023. We explore the potential of DeepPhase embeddings by adapting neural motion controllers to conversational gesture generation. This is achieved by introducing a recurrent encoder for control features. We additionally use VQ-VAE codebook encoding of gestures to support dyadic setup. The resulting system generates stable realistic motion controllable by audio, text and interlocutor's motion.
["embodied agents", "neural networks", "gesture generation", "social robotics", "deep learning", "phase manifold"]
ABSTRACT
This paper describes FineMotion's entry to the GENEA Challenge 2023. We explore the potential of DeepPhase embeddings by adapting neural motion controllers to conversational gesture generation. This is achieved by introducing a recurrent encoder for control features. We additionally use VQ-VAE codebook encoding of gestures to support the dyadic setup. The resulting system generates stable, realistic motion controllable by audio, text and the interlocutor's motion.

CCS CONCEPTS
• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability.

KEYWORDS
embodied agents, neural networks, gesture generation, social robotics, deep learning, phase manifold

ACM Reference Format:
Vladislav Korzun, Anna Beloborodova, and Arkady Ilin. 2023. The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3577190.3616119

1 INTRODUCTION
The automatic generation of conversational gestures for 3D human models is one of the most prominent problems in character animation. It can be used to simplify video game production and increase the realism of characters' movements. Furthermore, as visual assistants and VTubers become more popular, the demand for realistic gestures for embodied virtual agents is also growing.

The task of automatic gesture generation from speech has produced several promising solutions. During the GENEA Challenge 2022 [25], one of the approaches was rated even better than real motion-capture data in terms of motion quality [27].
However, the task at hand is becoming more complicated year by year.

The current GENEA Challenge 2023 [15] considers a dialogue setup. Thus, the participants' systems should consider not only the input speech but also the conversation partner's behaviour. As in the previous year, the «Talking With Hands 16.2M» dataset [16] was used, but now each sample contains two sets of motion, audio and text: one for the main agent and one for the interlocutor.

In the related tasks of condition-based motion generation [23] and character controllers [26], researchers propose slightly different approaches that could also benefit conversational gesture generation. One of the most promising approaches to animation representation was presented in [19]: taking into account that motion curves can be considered periodic functions, they can be decomposed via the Fourier transform to obtain high-level features.

We therefore decided to examine the phase manifold formed by DeepPhase's Periodic AutoEncoder for conversational gesture generation. In order to properly address the dyadic setup of the challenge, we implemented an additional interlocutor gesture representation based on VQ-VAE codebook encoding.
Evaluation [15] showed that our system generates realistic motion which is statistically suitable to the interlocutor's behaviour. However, our system showed poor results on appropriateness for speech, which suggests the need for further development. Our code, along with video examples of generated motion, is publicly available¹ to help other researchers reproduce our results.

Our paper is organized as follows: Section 2 gives an overview of related work; Section 3 describes our approach in general; Section 4 details the generator model's input and output formats; Section 5 presents and discusses the evaluation results; and Section 6 concludes.

2 RELATED WORK
In this section, we give a general overview of recent conversational gesture generation approaches. We then describe some existing approaches to related tasks that inspired our solution.

2.1 Conversational gestures generation
The task of conversational gesture generation has been advancing for several years. Starting from window-based frame-by-frame generation [13], end-to-end approaches led to auto-regression [14]. Later, the GENEA Challenge 2022 produced many successful systems. Some of them are based on recurrent models [4, 6, 24], and some even utilise GPT-like large architectures [18], but the most successful hybrid approach was presented in [27], where the authors use a graph-based model to transition between short clips.

¹https://github.com/FineMotion/GENEA_2023

Slightly weaker results were shown by purely auto-regressive approaches [11, 12], which faced the main shortcoming of such architectures: converging to a mean pose. In [12], as well as in [14], the authors tried to overcome this problem by adding different teacher-forcing techniques to force models to first extract an appropriate audio representation.
However, auto-regressive approaches have shown significant success without such techniques in a different task: character controllers.

2.2 Character controllers
The task of creating automatic character controllers is related to locomotion [8]. The controlled character should move its joints with respect to the environment and user input. Many data-driven character controller approaches use a mixture-of-experts framework [10], for example, Mode-Adaptive Neural Networks (MANN) [26].

Later, the MANN model was improved with local phases [20]. Local phases are computed as a derivative of a block function containing binary states of whether a bone contacts the object/environment. The efficiency of the proposed approach was demonstrated by creating a neural motion controller for a basketball game, where the block function represented a player's contact with the ball or the floor.

Finally, in [19] an unsupervised approach for automatic phase extraction was suggested. The proposed Periodic AutoEncoder extracts periodic features from motion curves after training on unstructured motion datasets. The architecture utilizes a temporal convolutional autoencoder [9], additionally applying a real Fast Fourier Transform to each channel of the latent space. The obtained periodic features were then used to train a motion controller as before, demonstrating the capability of the extracted features.

2.3 Text-to-Gesture Animation Generation
The task of generating human gesture animations from textual prompts involves producing expressive and natural-looking gestures that correspond to a given textual input. For example, in the work of [7] the authors suggest jointly encoding gestures, text and images into a single latent space using Contrastive Language-Image Pretraining (CLIP) [2]. In GestureDiffuCLIP [21], the authors combined the power of CLIP and diffusion models to generate realistic and diverse gesture animations from text.
To enable the encoding and decoding of gestures, the Vector Quantized Variational Autoencoder (VQ-VAE) [1] was used. Additionally, VQ-VAE has proven to be a valuable tool beyond text-to-gesture generation: in the context of conversational gestures, recent research [18] and [22] applied VQ-VAE to encode and decode gestures, achieving improved gesture generation performance.

3 SYSTEM OVERVIEW
Our approach follows the original DeepPhase paper [19]. It contains two main stages: training a Periodic AutoEncoder to extract phase features, and building a neural motion controller upon the extracted phases. The motion controller is based on the mixture-of-experts framework also mentioned in the DeepPhase paper, with some ideas from the authors' previous work [20]. The main difference between our system and those mentioned above is that we use an auxiliary recurrent Control Variables Encoder to guide motion by audio, text and the interlocutor's motion instead of the user's input. Apart from that, we trained an additional encoder for the interlocutor's motion and supplemented the control features with the obtained latent representation.

3.1 DeepPhase embeddings
To prepare the phase manifold we follow the proposed pipeline from [19] exactly. To train the Periodic AutoEncoder (PAE) we first extract joint positions from the main agent's motion data. We use all motion files, but extract positions for 26 joints, including the world root and excluding fingers. We then calculate joint position velocities and smooth them via a Butterworth filter [3].

The training configuration of the PAE is as follows: a training sample contains 61 frames and covers a 2-second window with 26×3 channels. The number of latent channels (phases) is 8, following the dancing pipeline from the official repository². The number of intermediate channels is equal to the number of joints.
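Section 2.2 noted that the PAE applies a real FFT to each latent channel to obtain periodic parameters. As a rough illustration (not the authors' code; the function name is ours, and in DeepPhase the phase shift S is predicted by a separate layer, so only A, F and B are estimated here), these parameters can be read off the spectrum of a windowed latent curve:

```python
import numpy as np

def fft_params(x, fps=30):
    """Estimate amplitude A, dominant frequency F and offset B of one
    non-constant latent channel x of shape (T,) over a short window."""
    T = len(x)
    spec = np.fft.rfft(x)
    power = np.abs(spec[1:]) ** 2               # spectral power, DC bin dropped
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)[1:]  # frequencies in Hz
    A = 2.0 * np.sqrt(power.sum()) / T           # amplitude of the periodic part
    F = (freqs * power).sum() / power.sum()      # power-weighted mean frequency
    B = spec[0].real / T                         # offset (mean of the signal)
    return A, F, B
```

For a pure 2 Hz sinusoid sampled at 30 FPS this recovers its amplitude, frequency and offset almost exactly.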
The model is trained for 150 epochs with a batch size of 512 and the AdamW optimizer with a Cyclic Learning Rate Scheduler with Restarts [17], with weight decay and learning rate both equal to 10e-4, a restart period of 10, a multiplier of 2 and the cosine policy.

The obtained model extracts phase features as in the original paper. From each time window t it extracts amplitude (A), frequency (F), offset (B) and phase shift (S), with A, F, B, S ∈ R^M, where M is the number of latent channels (phases). The phase manifold P ∈ R^{2M} for frame t is computed by

P(t)_{2i−1} = A_i(t) · sin(2π · S_i(t)),
P(t)_{2i} = A_i(t) · cos(2π · S_i(t)).  (1)

To obtain the phase features P ∈ R^{T×2M} of a motion of length T, we simply extract the phase manifold over a sliding window, i.e. P = {P(t) | t ∈ [1, T]}. To illustrate the periodicity of the extracted phase features, Figure 1 shows them separated by latent channel on a 10-second sample.

Figure 1: Extracted phase features example

Because the PAE is trained on joint velocities, the obtained phases cannot be used as an intermediate representation of motion in place of the original data for training the motion generator. The problem lies in the difficulty of converting joint positions into joint rotations without introducing kinematic constraints. To overcome this, we also tried to train the PAE on joint rotations. Unfortunately, the obtained phase manifold no longer looks like a periodic function. A PAE trained on angular velocities could theoretically show better results, but we decided to stay with the phase manifold trained on joint velocities.

²https://github.com/sebastianstarke/AI4Animation

3.2 Generation model
Our motion generation model extends the mixture-of-experts framework from [19]. It contains two feedforward neural networks: a Gating Network and a Motion Prediction Network.
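Returning briefly to Section 3.1, Eq. (1) maps the per-frame amplitudes A and phase shifts S onto a point of the 2M-dimensional phase manifold. A minimal sketch (the function name is ours):

```python
import numpy as np

def phase_manifold(A, S):
    """Eq. (1): interleave A_i*sin(2*pi*S_i) and A_i*cos(2*pi*S_i).

    A, S: (M,) amplitude and phase shift per latent channel for one frame.
    Returns P: (2M,) point on the phase manifold."""
    A, S = np.asarray(A, float), np.asarray(S, float)
    P = np.empty(2 * len(A))
    P[0::2] = A * np.sin(2 * np.pi * S)   # P_{2i-1}
    P[1::2] = A * np.cos(2 * np.pi * S)   # P_{2i}
    return P
```

Stacking these per-frame points over a sliding window yields the T×2M phase features used downstream.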
The model's notation follows [20]. The Gating Network is built upon a stack of linear layers with ELU [5] activations between them. It takes phase features and predicts weights for the experts; in our case, there are 8 experts. The Motion Prediction Network then uses these weights to form linear combinations over the experts. The Motion Prediction Network itself consists of several "Expert Layers" with ELU activations between them. Each layer E uses the expert weights α = {α_i, i ∈ [1, N]} and input x as follows:

E(x, α) = Σ_{i=1}^{N} α_i (W_i x + b_i)  (2)

where W_i ∈ R^{h×m} and b_i ∈ R^h are weights and biases respectively, with m and h being the input and output dimensions. As in the original DeepPhase repository, the number of "Expert Layers", as well as the number of linear layers in the Gating Network, is equal to 3.

3.3 Control Variables Encoder
Initially, the input and output data formats were similar to [20], but significant changes were introduced. As control-variable input, we use a similar time window of audio features. However, the more control features we added (such as text and the interlocutor's pose), the larger the control-variable vector would become. We therefore decided to add an additional recurrent encoder of control features, based on a bi-directional GRU over the FeedForward Highway as in [12], to shorten this vector. It takes time-window features around the current frame and returns the RNN output vector corresponding to the considered frame.

3.4 Interlocutor Gesture Encoder
Model. To respond effectively to the gestures of the interlocutor, our model leverages the Interlocutor Gesture Encoder, a crucial component based on the VQ-VAE framework from [1]. This model has shown good results in gesture coding, as demonstrated in [18] and [22]. The Interlocutor Gesture Encoder enables us to encode high-quality representations of gestures into compact vectors. For better learning, we added improvements such as exponential codebook smoothing and discarding unused vectors, as suggested in the original article.

Data processing.
To train the VQ-VAE model, we segment gestures according to the beats in the audio. This idea was proposed in [22]: the authors divide gestures into segments that align with the rhythmic structure of the audio, as this is believed to capture the salient aspects of the gestures. With this approach, the maximum number of frames in one gesture sample is 18. This approach has shown promising results in capturing temporal dynamics and synchronizing gestures with the corresponding audio cues. Building upon this concept, we adopt a similar data processing strategy in our study to leverage the benefits of aligning gestures with the rhythmic elements of the audio. During training, the network is fed only with those gesture samples, from both partners, in which at least one conversational partner was speaking. Each selected sample corresponds to the speaker's audio beats. During inference, we feed only the interlocutor's gestures corresponding to the active speaker's audio beats. In order to determine the moments of speech, we use the text transcript. If there is no active speaker at a given moment, the main agent's audio beats are chosen for guidance.

Training. We train the VQ-VAE model with a codebook size of 2048. The dimension of the codebook vectors is 256. Codebook occupancy reaches 70%. The model was trained for 152 epochs.

Inference. To feed the interlocutor's gestures into the main model, we split the interlocutor's audio into beats and extract a vector for each sample. We then duplicate each vector to the length of its beat. Thus, we obtain a number of vectors equal to the number of frames in the original gesture.

4 GENERATOR INPUTS AND OUTPUTS

Figure 2: Generator model

The overall system is illustrated in Figure 2.
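The inference-time expansion described in Section 3.4, where each beat segment's quantized codebook vector is duplicated across that segment's frames, can be sketched as follows (a hedged illustration; the names are ours):

```python
import numpy as np

def expand_codes(codes, frames_per_beat):
    """codes: (n_beats, d), one quantized vector per beat segment.
    frames_per_beat: number of motion frames each segment covers.
    Returns (sum(frames_per_beat), d): one vector per motion frame."""
    return np.repeat(np.asarray(codes), frames_per_beat, axis=0)
```

With, say, two beat segments covering 2 and 3 frames, this yields 5 per-frame vectors, matching the frame count of the original gesture.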
The model takes information from the current frame and predicts the next frame. We use time-series window notation similar to [20], i.e. T_{t0}^{t1} represents features collected within a time window t0 ≤ t ≤ t1. The final data formats are described below.

Inputs. The generator's input consists of 3 components: X_i^S, X_i^A, X_i^P.

The character state X_i^S on the i-th frame consists of concatenated joint rotations and velocities. We initially also used joint positions, but we observed that the model is more stable without them. We represent joint rotations via the 6D continuous representation from [28] to eliminate cases where Euler angles take values of 0 or 180 degrees. Joint velocities were smoothed beforehand, as in the PAE training routine. It is also worth mentioning that the character state and phases were normalized beforehand.

The control variables X_i^A are features from the time window T_{−1s}^{1s} around the current frame, which are passed to the Control Variables Encoder to obtain a single control vector X_i^C; this vector is concatenated with the character state as the main input to the Motion Prediction Network. As initial control features, we extract 26 MFCCs from audio, a GloVe embedding of size 50, and the codebook encoding obtained from the VQ-VAE, with respect to the motion frame rate, which is 30 FPS. To align the text and interlocutor features, we distribute them evenly within the frames of the corresponding time span. We also tried other combinations, including the interlocutor's speech, but they showed less stable results. We decided to make the dimension of X_i^C equal to that of X_i^S.

The motion phases X_i^P = Θ_i ∈ R^{2KT} are phase features extracted via the PAE, uniformly sampled from the time window T_{−1s}^{1s} and concatenated into one vector, i.e. Θ_i = {P(i−30), ..., P(i−5), P(i), P(i+5), ..., P(i+30)}, so that 13 frames are sampled in the window.

Outputs.
Our Motion Prediction Network output contains only 2 components: the next-frame character state Y_{i+1}^S, which is analogous to the input one, and the future motion phases Y_{i+1}^P = {Θ_{i+1}, ΔΘ_{i+1}}, containing not only the phases but also the phase velocities for the time window T_{0s}^{1s} with respect to frame i+1, i.e. Θ_{i+1} = {P(i+1), P(i+6), ..., P(i+31)}, with 7 frames in total.

Training. The model is trained to predict the next frame from the current frame; it does not use outputs from the previous step, since every frame is taken from the dataset directly and processed independently. All parts of the generator are trained simultaneously, end-to-end, for 50 epochs with a batch size of 2048 and a default Adam optimizer with a learning rate of 10e-4. The hidden sizes of the Gating Network and the Motion Prediction Network are 64 and 1024 respectively.

Inference. Finally, during inference, our model predicts the next frame from the previous one in an auto-regressive fashion. We also blend the phases between iterations before passing them to the next step: Θ'_{i+1} = λΘ_{i+1} + (1 − λ)(Θ_i + ΔΘ_{i+1}), with λ = 0.5.

5 RESULTS AND DISCUSSION
As in previous challenges, the organizers provided a comprehensive human evaluation of the participating systems [15]. This time, 3 main subjective measures were considered: human-likeness, appropriateness to speech, and appropriateness to the interlocutor's behaviour.

Human-likeness estimates the overall quality of the generated motion without taking into account the agent's speech or the interlocutor's behaviour. Our approach, indexed SL, shows competitive results (median score 51 ∈ [50, 51] in Table 1), indicating the ability of DeepPhase embeddings to maintain periodicity and, as a result, the realism of the predicted motion.
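For reference, the phase blending used at inference (Section 4 above) is a one-line combination of the predicted phases with the previous phases advanced by the predicted velocity; a sketch with the paper's λ = 0.5 (the function name is ours):

```python
import numpy as np

def blend_phases(theta_pred, delta_pred, theta_prev, lam=0.5):
    """Theta'_{i+1} = lam*Theta_{i+1} + (1-lam)*(Theta_i + dTheta_{i+1})."""
    theta_pred, delta_pred, theta_prev = (
        np.asarray(a, float) for a in (theta_pred, delta_pred, theta_prev)
    )
    return lam * theta_pred + (1.0 - lam) * (theta_prev + delta_pred)
```

With λ = 0.5 this averages the direct prediction and the velocity-integrated estimate, damping drift between autoregressive steps.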
Although our model is rated rather well, it does not reach the quality of natural motion or of the state-of-the-art approaches.

In order to estimate the appropriateness for agent speech, evaluation participants were given two motion clips generated by one model using separate audio samples and asked to identify which of the two clips corresponds to the target speech sample. Good models generate motions that participants can easily match to the audio. The main quantity of interest in the appropriateness evaluation is the mean appropriateness score (MAS). Unfortunately, our model provides poor appropriateness results (0.05 ± 0.05 MAS in Table 1). The organizers note (Section 3.6 in [15]) that our solution does not differ statistically from chance performance. This leads us to suspect a weakness in the audio and text features we used.

Table 1: Summary statistics of studies

Condition | Human-Likeness (Median Score) | Agent Speech (MAS) | Interlocutor (MAS)
NA | 71 ∈ [70, 71] | 0.81 ± 0.06 | 0.63 ± 0.08
BM | 43 ∈ [42, 45] | 0.20 ± 0.05 | −0.01 ± 0.06
BD | 46 ∈ [43, 47] | 0.14 ± 0.06 | 0.07 ± 0.06
SA | 30 ∈ [29, 31] | 0.11 ± 0.06 | 0.09 ± 0.06
SB | 24 ∈ [23, 27] | 0.13 ± 0.06 | 0.07 ± 0.08
SC | 9 ∈ [9, 9] | −0.02 ± 0.04 | −0.03 ± 0.05
SD | 45 ∈ [43, 47] | 0.14 ± 0.06 | 0.02 ± 0.07
SE | 50 ∈ [49, 51] | 0.16 ± 0.05 | 0.05 ± 0.07
SF | 65 ∈ [64, 67] | 0.20 ± 0.06 | 0.04 ± 0.06
SG | 69 ∈ [67, 70] | 0.39 ± 0.07 | −0.09 ± 0.08
SH | 46 ∈ [44, 49] | 0.09 ± 0.07 | −0.21 ± 0.07
SI | 40 ∈ [39, 43] | 0.16 ± 0.06 | 0.04 ± 0.08
SJ | 51 ∈ [50, 53] | 0.27 ± 0.06 | −0.03 ± 0.05
SK | 37 ∈ [35, 40] | 0.18 ± 0.06 | −0.06 ± 0.09
SL | 51 ∈ [50, 51] | 0.05 ± 0.05 | 0.07 ± 0.06

An addition to this year's challenge is the introduction of an appropriateness metric for the main agent's reaction to the interlocutor's behaviour. The study itself is similar to the previous one, but with the interlocutor's motion being varied, and it is conducted while the main agent is silent.
Surprisingly, using the interlocutor's motion features yields better results here (0.07 ± 0.06 MAS in Table 1), significantly better than chance (Section 4.7 in [15]).

Overall, our system shows promising results, more so on human-likeness and appropriateness for the interlocutor. There are, however, ways to improve this approach, such as adding more compelling audio features or adding teacher forcing to direct attention to the speech features. Nevertheless, using DeepPhase embeddings allows us to train the model without it converging to a rest pose. Additionally, the VQ-VAE codebook encoding allowed the resulting solution to accommodate the dyadic setup of the conversation and to generate plausible reactions to the interlocutor's behaviour.

6 CONCLUSION
Sharing approaches between different tasks in the domain of motion generation could significantly improve the overall state of the research community. Our system is based on an approach that has proven itself as a neural motion controller and showed promising results during the evaluation. We assume that exploiting the periodic properties of motion could yield improvements in all problems connected with animation. DeepPhase embeddings are one of the latest and most successful approaches for extracting these properties, so we recommend considering them, as well as VQ-VAE codebook encoding, in the development of future models.

Although our system showed relatively good results in the challenge, there is room for improvement. For example, a better speech encoder or additional data filtering could be used. The mixture-of-experts framework could also be extended to work with sequences. Teacher-forcing techniques could be applied as well.
uuhTnMJApaw
The paper utilizes techniques such as VQ-VAE codebook encoding and phase features to improve the accuracy and human perceptibility of generated gestures, and shows innovation and potential by considering interlocutor information. Although the paper lacks more detailed experimental analysis, I recommend accepting it after revisions.
6: Marginally above acceptance threshold
Abstract: The authors present an innovative method that utilizes DeepPhase embeddings and VQ-VAE codebook encoding to generate stable and realistic gestures suitable for conversational agents. The authors employ the Periodic AutoEncoder from DeepPhase to generate gestures and leverage VQ-VAE codebook encoding to extract cyclic properties, facilitating interaction between the conversational agent and the speaker. Furthermore, the authors evaluate their method in the GENEA Challenge 2023 with promising results.

Review Feedback:
(1) Does the interlocutor feature used in the article exist only in the codebook embedding? What if no interlocutor information is available?
(2) How is the gating network trained to obtain the expert weights? How should we understand the expert weights, and what is their significance? How does this branch affect the final results?
(3) How are the input expert weights, control vector, and character state fused in the Motion Prediction Network?
(4) How do different interlocutor control signals affect the generation of gestures for the speaker?
(5) In Table 1, the experimental results for agent speech are similar to random results, suggesting that the model's learning ability is mediocre without the codebook feature. Have you considered using techniques like diffusion as the generation backbone network for better results?

I hope the above feedback can assist you in improving your work.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
swc28UDR8Wk
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models
["Weiyu Zhao", "Liangxiao Hu", "Shengping Zhang"]
This paper describes the DiffuGesture entry to the GENEA Challenge 2023. In this paper, we utilize conditional diffusion models to formulate the gesture generation problem. The DiffuGesture system generates human-like gestures from the two-person dialogue scenario, which are responsive to the interlocutor's motions and accompany the input speech. The DiffuGesture system is built upon the recent DiffGesture [39]. Specifically, we introduce a lightweight transformer encoder to fuse the temporal relationships between human gestures and multi-modal conditions. Moreover, we adopt implicit classifier-free guidance to trade off between diversity and gesture quality. According to the collective evaluation released by the GENEA Challenge 2023, our system demonstrates strong competitiveness in the appropriateness evaluation.
["gesture generation", "diffusion models", "neural networks"]
ABSTRACT
This paper describes the DiffuGesture entry to the GENEA Challenge 2023. In this paper, we utilize conditional diffusion models to formulate the gesture generation problem. The DiffuGesture system generates human-like gestures from the two-person dialogue scenario, which are responsive to the interlocutor's motions and accompany the input speech. The DiffuGesture system is built upon the recent DiffGesture [39]. Specifically, we introduce a lightweight transformer encoder to fuse the temporal relationships between human gestures and multi-modal conditions. Moreover, we adopt implicit classifier-free guidance to trade off between diversity and gesture quality. According to the collective evaluation released by the GENEA Challenge 2023, our system demonstrates strong competitiveness in the appropriateness evaluation.

CCS CONCEPTS
• Computing methodologies → Animation; Neural networks; • Human-centered computing → Virtual reality.

KEYWORDS
gesture generation, diffusion models, neural networks

ACM Reference Format:
Weiyu Zhao, Liangxiao Hu∗, and Shengping Zhang. 2023. DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI '23 Companion), October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3610661.3616552

1 INTRODUCTION
Human gestures serve as a distinct mode of communication in daily conversations, which assists the speakers in conveying semantic information more effectively and facilitates interpersonal communication [21, 29]. Therefore, generating realistic co-speech human gestures from conversations plays a crucial role in achieving improved interaction between virtual entities and humans.
∗Corresponding author.

Our goal is to generate co-speech human gestures from the two-person dialogue. However, generating human gestures with multi-modal data such as audio, text, and conversational cues in two-person dialogue remains a challenging and unresolved problem.

Early research in data-driven co-speech gesture generation often relies on statistical analysis. Levine et al. [16] utilize probabilistic models to establish the relationship between audio and gestures. In recent years, deep learning methods have been increasingly applied in co-speech gesture generation. Kucherenko et al. [12] and Yoon et al. [34] employ multi-layer perceptron (MLP) and recurrent neural network (RNN) methods to generate deterministic human gestures, respectively. However, these approaches do not adequately address the implicit mapping between the data and gestures [13]. To achieve more diverse and personalized gesture movements and improve the mapping between data and gestures, methods using GANs [3, 25, 30], diffusion models [27, 32, 39], and VQ-VAEs [20, 22] have emerged.

However, these methods mainly focus on single-person co-speech gesture generation.
In this paper, we present a novel approach for co-speech human gesture generation in the two-person dialogue scenario. Specifically, given the behavior of the interlocutor and the audio and textual transcriptions of the main agent, we generate the reaction and co-speech movements of the main agent. Inspired by [39], we adopt conditional diffusion models for co-speech gesture generation from the two-person dialogue. Specifically, we introduce a lightweight transformer encoder to enhance the contextual relevance between human gestures and multi-modal conditions. Finally, we introduce implicit classifier-free guidance to trade off between diversity and gesture quality.

The main contributions of our work are:
• We present an early attempt to utilize conditional diffusion models for co-speech human gesture generation from two-person dialogue, which generates impressive co-speech gesture movements.
• We introduce a lightweight transformer encoder that effectively fuses the temporal relationships between human gestures and multi-modal conditions.

2 RELATED WORK
In this section, we discuss previous work in the fields of gesture generation and diffusion-based generation.

2.1 Data-driven Gesture Generation
The data-driven approach to gesture generation has found extensive applications across various domains. In recent years, researchers have utilized audio [6, 17, 18, 22], transcribed text [3, 10, 23, 26, 27, 36], and multimodal data [2, 19, 33] to drive gesture generation. The use of audio-driven gesture generation is quite common in various applications. For example, Ginosar et al. [6] utilize an adversarial discriminator to regress gestures from audio. Qian et al. [22] employ conditional learning to achieve audio-driven gesture generation, alleviating the ambiguity in simultaneous speech and gesture synthesis.
Audio2gestures [18] and DanceFormer [17] use a variational autoencoder [11] and a Transformer [28], respectively, to generate gestures from audio. Text-driven motion synthesis can be seen as learning a joint embedding of the text feature space and the motion feature space [22]. Text2gestures [3] establishes the connection between text and gesture actions using a transformer. T2M-GPT [36] and MotionGPT [10], built upon the generative pre-trained transformer (GPT), treat gesture actions as a language and utilize VQ-VAE to transform text into gesture actions. MDM [27] and MotionClip [26] preprocess transcribed text using CLIP [23] to establish the conversion between action and text embeddings.

Recently, there has been an increasing trend in co-speech gesture generation to use multimodal data, including audio, text, and speaker ID. Yoon et al. [33] propose a model that combines multimodal context and adversarial training to generate gestures that resemble human-like movements and are synchronized with the speech content and rhythm. Rhythmic Gesticulator [2] is the first model to use neural networks to establish the relationship between gestures and audio in terms of rhythm and semantics. HA2G [19] leverages contrastive learning strategies to fully exploit the rich connections between speech audio, text, and human gestures, resulting in the generation of realistic gesture movements. However, none of the aforementioned works consider the influence of other individuals in dyadic conversations on the embodied agents.

2.2 Diffusion Models
Diffusion models are a type of probabilistic generative model based on stochastic processes [8], where initial data points gradually evolve towards the target distribution through a diffusion process at each time step. Dhariwal et al. [5] introduce classifier guidance to improve sample quality and generate higher-quality results.
Then, the introduction of classifier-free guidance [9] eliminates the need for explicit classification models and supports more open-ended and exploratory generation in various tasks. Diffusion models have recently been widely applied in various fields, such as image generation [24], 3D shape generation [31], and video generation [7]. More recently, in the context of gesture generation tasks, diffusion generative models [1, 27, 37, 39] have also been employed for co-speech gesture generation. Inspired by the work of DiffGesture [39] in 2D gesture generation, we develop a framework for generating 3D gesture poses from multimodal data in a two-person dialogue scenario.

3 METHOD
Given the behavior of the interlocutor and the audio and textual transcriptions of the main agent, our goal is to generate the listening reactions and co-speech motions simultaneously. The architecture of our system is depicted in Figure 1(a). We first introduce the problem definition in Section 3.1. Then we present the diffusion process and reverse process for gesture generation in Section 3.2. Finally, we develop a transformer encoder to fuse the temporal relationships between human gestures and multi-modal conditions in Section 3.3.

3.1 Problem Definition
Given the sequences of 3D full-body motions, we represent them as $x = \{p_1, p_2, p_3, \dots, p_N\} \in \mathbb{R}^{N \times 3J}$, where $N$ is the sequence length and $J$ is the total number of joints. The reverse denoising process $G$ of the diffusion model is parameterized by $\theta$ to synthesize the main agent's skeleton sequence $x_m$, conditioned on the multi-modal conditions $C$ and the initial poses $x_{pre}$ of the previous $M$ frames. The learning objective can be expressed as $\arg\min_\theta \| x_m - G_\theta(C, x_{pre}) \|$.

3.2 Diffusion-based Gesture Generation
Inspired by the previous work [39], we extend this model to the two-person dialogue scenario. Unlike [39], which generates 2D skeletal upper-body poses, we synthesize full-body human gestures in a two-person dialogue scenario.

Diffusion Process.
The diffusion process, also known as the forward process, approximates the posterior distribution $q(x_{1:T} | x_0)$. It gradually introduces Gaussian noise into the original distribution according to the variance schedule $\beta_1, \dots, \beta_T$, where $\beta_i \in (0, 1)$. The diffusion process is defined as follows:

$$q(x^{1:N}_t | x^{1:N}_{t-1}) = \mathcal{N}\big(\sqrt{1 - \beta_t}\, x^{1:N}_{t-1},\; \beta_t I\big), \quad (1)$$
$$q(x_{1:T} | x_0) = \prod_{t=1}^{T} q(x^{1:N}_t | x^{1:N}_{t-1}), \quad (2)$$

where $x^{1:N}_t$ represents the main agent's motion sequence $\{p_m\}_{i=1}^{N}$ at denoising step $t$. In the following, we slightly abuse notation and write $x$ for $x^{1:N}$. By progressively adding noise in this manner to the original gesture motions $x_0$, the sequence approaches a distribution that closely resembles white noise.

Reverse Process. The reverse process, also known as the generation process, estimates the joint distribution $p_\theta(x_{0:T})$. The reverse process of diffusion models also maintains the form of a Gaussian transition. Additionally, following the idea of classifier-free guidance, we train the model in both unconditional and conditional generation settings to generate more realistic and diverse gesture motions. The reverse process is defined as follows:

$$p_\theta(x_{0:T}) = p_\theta(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} | x_t, C), \quad (3)$$
$$p_\theta(x_{t-1} | x_t, C) = \mathcal{N}\big(x_{t-1};\; \mu_\theta(x_t, t, C),\; \Sigma_\theta(x_t, t)\big). \quad (4)$$

Equation (4) describes conditional generation; we set the conditions $C$ to zero (denoted as $\varphi$) for unconditional generation in the training stage. The corrupted noisy gesture sequence $x_t$ is sampled from $q(x_t | x_0)$.

Training loss. Following DDPM [8], the previous corrupted gesture sequence $x_{t-1}$ is defined as follows:

$$x_{t-1} = \frac{x_t - \sqrt{1 - \bar{\alpha}_t}\, \hat{\varepsilon}}{\sqrt{\bar{\alpha}_t}}, \quad (5)$$
$$\text{where } \bar{\alpha}_t = \prod_{i=1}^{t} (1 - \beta_i). \quad (6)$$

So we can denoise the Gaussian noise back to the original gesture motion distribution step by step. We then use the mean squared error (MSE) loss between the estimated noise and the actual noise at each time step [39]:

$$\mathcal{L}_{simple} = \mathbb{E}_q \Big[ \big\| \varepsilon - \varepsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \varepsilon,\; C,\; t\big) \big\|^2 \Big], \quad (7)$$

where $\varepsilon_\theta$ is the predicted Gaussian noise and $\varepsilon$ is the actual added noise. During the training process, we randomly mask the conditions $C$ for the unconditional setting.

Sampling. Generating motion from speech is an implicit mapping rather than a direct one-to-one correspondence between speech and gestures. To ensure a better correlation between audio and actions, we introduce classifier-free guidance [9]. From the perspective of gesture generation, it can be written as follows:

$$G_M = G(x_t, \varphi, t) + s \cdot \big(G(x_t, C, t) - G(x_t, \varphi, t)\big), \quad (8)$$

where $s$ is a hyperparameter. As mentioned in the training-loss section, during training we use random masking to create unconditional input for training the unconditional model. We then train a single transformer encoder and MLP layer under the various conditioning setups shared between the conditional and unconditional models. This enables us to realize classifier-free guidance. Based on the aforementioned context, diffusion models can be used to generate natural embodied-agent gestures in a two-person dialogue setting.

Figure 1: Overview of the DiffuGesture framework. In the preprocessing stage (yellow), we develop a condition encoder and a prepose encoder to process multi-modal data and previous poses, respectively. We then concatenate the two outputs to create the condition features C. In the training stage (green), we introduce classifier-free guidance to train the transformer encoder. In the sampling stage (pink), we start from random noise x_T and generate a clean sample x_0 through T denoising steps.

3.3 Cross-Modal Attention Encoding
Generating 3D gesture poses using conditional diffusion models is different from generating images. Both the pose sequence $x$ and the multi-modal conditions $C$ exhibit strong temporal dependencies, so we need a module that keeps our results time-dependent.
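The closed-form noising used in Eq. (7) and the guidance rule of Eq. (8) can be sketched numerically. This is an illustrative NumPy sketch, not the authors' implementation; the linear variance schedule and all names are assumptions:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # assumed variance schedule beta_1..beta_T
alpha_bar = np.cumprod(1.0 - betas)      # Eq. (6): product over i of (1 - beta_i)

def q_sample(x0, t, eps):
    """Corrupt clean gestures x0 directly to step t (the noised input
    appearing inside the loss of Eq. (7))."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def simple_loss(eps_pred, eps):
    """MSE between predicted and injected noise, Eq. (7)."""
    return np.mean((eps_pred - eps) ** 2)

def guided_output(g_uncond, g_cond, s):
    """Classifier-free guidance combination, Eq. (8)."""
    return g_uncond + s * (g_cond - g_uncond)
```

With s = 0 the guided output reduces to the unconditional prediction, and with s = 1 to the conditional one; values of s above 1 push samples further towards the condition at the cost of diversity.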
Unlike previous work in the GENEA 2022 challenge that utilizes LSTMs [4], VQ-VAEs [20], and graph models [38], we employ a lightweight transformer encoder to encode N frames of continuous motions and multi-modal data. We align the noisy gesture sequence $x_t$ and the multi-modal conditions $C$ in the time dimension and treat each frame as a separate token. The time step $t$ is treated as a separate token. We then utilize the attention mechanism for encoding:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{Q K^T}{\sqrt{d_k}}\Big) V, \quad (9)$$

where $Q$, $K$, and $V$ are the query, key, and value matrices computed from the input tokens in the multi-head attention mechanism.

4 EXPERIMENT
4.1 Data Processing
The only dataset we use is the GENEA Challenge 2023 [14] dataset, which is an extension of Lee et al.'s Talking With Hands [15] dataset. The dataset features a main agent (tasked with generating motion) and an interlocutor (the other party in the conversation). The conversation data is dyadic, providing audio, text transcriptions, speaker IDs, and motion for both parties. In the provided official data, each recorded conversation is duplicated with flipped roles to augment the training data.

We fully leverage the various information available in the dataset, including the audio and transcribed text of both the main agent and the interlocutor, as well as the speaker IDs. We follow the same processing approach as the baseline [4] for audio, transcriptions, and human body joints. We obtain three audio features at a sampling rate of 44100 Hz: mel-spectrograms, MFCCs, and prosodic features. The generated frames have a rate of 30 FPS and their number matches the duration of the motion sequence. We encode the text using fastText, resulting in word vectors of dimension 300. Additionally, two extra dimensions are used to indicate whether the speaker is silent or laughing.
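The scaled dot-product attention of Eq. (9) is standard and can be written compactly. A self-contained NumPy sketch (single-head, unbatched, for illustration only):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, Eq. (9): softmax(Q K^T / sqrt(d_k)) V.

    Q: (T_q, d_k) queries, K: (T_k, d_k) keys, V: (T_k, d_v) values.
    """
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (T_q, T_k); each row sums to 1
    return weights @ V                          # (T_q, d_v)
```

In the multi-head case this computation is repeated per head on learned projections of the tokens and the results are concatenated; here each of the N frames (plus the time-step token) would be one row of Q, K, and V.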
Furthermore, we define the identity information of each speaker using one-hot encoding. For the motion data, we select 25 joints, including the root node, that have a significant influence on skeleton motion; these joints are represented in 78 dimensions. To generate high-quality motion sequences, we segment each motion sequence into chunks of 300 frames, which serve as inputs to the diffusion process. To ensure continuity between adjacent motion segments, we extract the preceding 50 poses as part of the generation condition. After aligning the audio features, encoded text, identity information, and speakers' motion sequences in the temporal dimension, we obtain condition sequences of the same length as the motion sequences. Similarly, the previous poses are mapped to the corresponding dimension by the prepose encoder.

4.2 Evaluation
The evaluation of our approach is conducted through subjective assessment by the organizers of the GENEA Challenge 2023 and other participating teams. The organizers recruited study participants residing in the UK, IE, USA, CAN, AUS, and NZ, who had English as their first language, via crowdsourcing platforms. Multiple attention checks were implemented during the experiment to ensure the participants' engagement and attentiveness. The evaluation of this challenge covered three aspects: human-likeness, appropriateness for agent speech, and appropriateness for the interlocutor. The specific results are presented in Table 1 and Table 2. The natural motion is labeled NA; our method is labeled SB in the tables.

Human-likeness. The study participants watch 8 to 10 seconds of video and rate how human-like the motion of the virtual character is, independent of the dialogue content and the speaker. DiffuGesture performs poorly on this metric.
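The fixed-length segmentation described in Section 4.1 (300-frame chunks, each conditioned on the 50 preceding poses) can be sketched as follows. The helper is hypothetical, written from the description rather than the authors' code; the first chunk's missing context is zero-padded here as one plausible choice:

```python
import numpy as np

def make_chunks(motion, chunk_len=300, n_prev=50):
    """Segment a (N, D) motion sequence into fixed-length chunks, each
    paired with the n_prev poses that precede it as conditioning context.

    Returns a list of (prev_poses, chunk) pairs; a trailing remainder
    shorter than chunk_len is dropped in this sketch.
    """
    chunks = []
    for start in range(0, len(motion) - chunk_len + 1, chunk_len):
        if start >= n_prev:
            prev = motion[start - n_prev:start]
        else:
            # First chunk: no real history, pad the context with zeros
            prev = np.zeros((n_prev, motion.shape[1]))
        chunks.append((prev, motion[start:start + chunk_len]))
    return chunks
```

Because each chunk is denoised independently, only the 50-frame context ties adjacent chunks together, which is consistent with the junction jitter discussed in Section 5.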
Figure 2: The bar plots display response distribution in the appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. The blue bars represent preferred matched motion responses, and the red bars represent preferred mismatched motion responses. The height of each bar corresponds to the fraction of responses in each category. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.

Appropriateness for agent speech. This metric evaluates whether the motion of the virtual character is appropriate for the given speech while controlling for the overall human-likeness of the motion [35]. During the testing process, study participants are presented with a pair of videos, both from the same condition, where one video matches the specific speech and the other is from an unrelated speech. Both videos play the specific speech, and participants are asked to select the video they believe best matches the speech.

Appropriateness for the interlocutor. During a conversation, both participants influence each other. Therefore, this metric evaluates whether the motion of the virtual character is appropriate for the given interlocutor's behavior (including speech and motion) while controlling for the overall human-likeness of the motion.
Study participants are also presented with a pair of videos, where the behavior of the main agent remains fixed, but the behavior of the interlocutor is randomly replaced in one of the videos. Participants are then asked to select the video that best matches the behavior of the interlocutor. DiffuGesture achieves promising results on this metric.

Table 1: Summary statistics of user-study responses from both appropriateness studies, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness after splitting ties. Conditions are ordered by mean appropriateness score.

(a) Appropriateness for agent speech

Condition   MAS            Pref. matched     2     1     0    −1    −2    Sum
NA           0.81 ± 0.06   73.6%           755   452   185   217   157   1766
SG           0.39 ± 0.07   61.8%           531   486   201   330   259   1807
SJ           0.27 ± 0.06   58.4%           338   521   391   401   155   1806
BM           0.20 ± 0.05   56.6%           269   559   390   451   139   1808
SF           0.20 ± 0.06   55.8%           397   483   261   421   249   1811
SK           0.18 ± 0.06   55.6%           370   491   283   406   252   1802
SI           0.16 ± 0.06   55.5%           283   547   342   428   202   1802
SE           0.16 ± 0.05   54.9%           221   525   489   453   117   1805
BD           0.14 ± 0.06   54.8%           310   505   357   422   220   1814
SD           0.14 ± 0.06   55.0%           252   561   350   459   175   1797
SB           0.13 ± 0.06   55.0%           320   508   339   386   262   1815
SA           0.11 ± 0.06   53.6%           238   495   438   444   162   1777
SH           0.09 ± 0.07   52.9%           384   438   258   393   325   1798
SL           0.05 ± 0.05   51.7%           200   522   432   491   170   1815
SC          −0.02 ± 0.04   49.1%            72   284  1057   314    76   1803

(b) Appropriateness for the interlocutor

Condition   MAS            Pref. matched     2     1     0    −1    −2    Sum
NA           0.63 ± 0.08   67.9%           367   272    98   189    88   1014
SA           0.09 ± 0.06   53.5%            77   243   444   194    55   1013
BD           0.07 ± 0.06   53.0%            74   274   374   229    59   1010
SB           0.07 ± 0.08   51.8%           156   262   206   263   119   1006
SL           0.07 ± 0.06   53.4%            52   267   439   204    47   1009
SE           0.05 ± 0.07   51.8%            89   305   263   284    73   1014
SF           0.04 ± 0.06   50.9%            94   208   419   208    76   1005
SI           0.04 ± 0.08   50.9%           147   269   193   269   129   1007
SD           0.02 ± 0.07   52.2%            85   307   278   241   106   1017
BM          −0.01 ± 0.06   49.9%            55   212   470   206    63   1006
SJ          −0.03 ± 0.05   49.1%            31   157   617   168    39   1012
SC          −0.03 ± 0.05   49.1%            34   183   541   190    45    993
SK          −0.06 ± 0.09   47.4%           200   227   111   276   205   1019
SG          −0.09 ± 0.08   46.7%           140   252   163   293   167   1015
SH          −0.21 ± 0.07   44.0%            55   237   308   270   144   1014

Table 2: Summary statistics of user-study ratings from the human-likeness study, with confidence intervals at the level α = 0.05. Conditions are ordered by decreasing sample median rating. Our entry is SB.

Condition   Median          Mean
NA          71 ∈ [70, 71]   68.4 ± 1.0
SG          69 ∈ [67, 70]   65.6 ± 1.4
SF          65 ∈ [64, 67]   63.6 ± 1.3
SJ          51 ∈ [50, 53]   51.8 ± 1.3
SL          51 ∈ [50, 51]   50.6 ± 1.3
SE          50 ∈ [49, 51]   50.9 ± 1.3
SH          46 ∈ [44, 49]   45.1 ± 1.5
BD          46 ∈ [43, 47]   45.3 ± 1.4
SD          45 ∈ [43, 47]   44.7 ± 1.3
BM          43 ∈ [42, 45]   42.9 ± 1.3
SI          40 ∈ [39, 43]   41.4 ± 1.4
SK          37 ∈ [35, 40]   40.2 ± 1.5
SA          30 ∈ [29, 31]   32.0 ± 1.3
SB          24 ∈ [23, 27]   27.4 ± 1.3
SC           9 ∈ [9, 9]     11.6 ± 0.9

5 DISCUSSION
As shown in Table 1, we achieve satisfactory results on both the appropriateness for agent speech and appropriateness for the interlocutor metrics, with scores of 0.13 and 0.07, respectively. For appropriateness for the interlocutor, we achieve favorable results: the "Pref. matched" score is 51.8%. Furthermore, as shown in Figure 2(b), a considerable proportion of participants chose our results as their preferred matched motion responses. We believe that several factors contribute to these results.
Firstly, we make effective use of the provided information, including audio, transcribed text, and interlocutor behavior, and our data processing methods have demonstrated their effectiveness. Additionally, the introduced cross-modal attention encoder proves to be effective: it enables us to adequately encode information from different modalities, thus generating plausible motions of the main agent with respect to the behavior of the interlocutor.

However, we achieve unsatisfactory results on the human-likeness metric, with a score of only 24. The challenge provides long-term human gesture sequences of variable length, while our naive diffusion model, without specific designs, only supports generating fixed-length motion sequences. We segment the condition sequences, simply predict 300 frames for each segment, and concatenate the predicted fixed-length motion sequences to generate the complete motions. This results in noticeable jitter at the junctions of the predicted fixed-length motion sequences. To eliminate this phenomenon, we made some efforts, such as taking the previously predicted motions and the acceleration between adjacent frames as part of the conditions. Furthermore, we also increased the length of the generated sequences to reduce the discontinuities of the generated motions. However, these naive methods did not yield the expected results: the acceleration constraint reduces the richness of the generated motions, making them less human-like. We also note that the provided motion sequences for evaluation are not the final optimized ones, which may have affected the evaluation results.

6 CONCLUSION
We propose DiffuGesture, as described in this paper, to participate in the GENEA Challenge 2023. Based on conditional diffusion models, we develop a system that generates co-speech human gestures for the main agent in a two-person dialogue.
In our system, we encode the features of audio, transcriptions, and interlocutor behavior using a transformer encoder. Furthermore, we adopt classifier-free guidance to trade off between diversity and gesture quality. The evaluation results show that DiffuGesture performs well in terms of the appropriateness for the interlocutor metric. However, compared to other systems participating in the challenge, it does not generate high-fidelity human-like motions effectively.

In the future, we will continue to explore conditional diffusion models to generate high-fidelity co-speech human gestures in various scenarios. We aim to handle the generation of variable-length motion sequences and reduce the distortion of motions at breakpoints. Additionally, we intend to investigate the incorporation of semantic supervision to aid the generation of co-speech gestures. We will focus on these aspects in our future work.
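As a sanity check on the evaluation numbers, the summary statistics in Table 1 can be recomputed from the raw response counts. A small sketch; the even tie-splitting rule for "Pref. matched" is an assumption, but it reproduces the reported figures:

```python
def summarize(counts):
    """Recompute one row of Table 1 from raw response counts.

    counts maps each response score (2, 1, 0, -1, -2) to its count.
    Returns (mean appropriateness score, fraction preferring matched
    motion). Ties (score 0) are split evenly between matched and
    mismatched, which is an assumed reading of "after splitting ties".
    """
    total = sum(counts.values())
    mas = sum(score * n for score, n in counts.items()) / total
    pref_matched = (counts[2] + counts[1] + 0.5 * counts[0]) / total
    return mas, pref_matched

# Condition NA from Table 1(a): reported MAS 0.81, Pref. matched 73.6%
na_speech = {2: 755, 1: 452, 0: 185, -1: 217, -2: 157}
```

Running `summarize(na_speech)` recovers the reported NA row to rounding precision, and the same holds for the SB rows of both sub-tables.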
f7616e2IuH
Introduces a cross-modal attention encoder to align the pose sequence and the multi-modal conditions in time.
6: Marginally above acceptance threshold
The paper was easy to read and well written. The work is heavily based on the pre-existing DiffGesture, adding the ability to synchronize the temporal relationships between human gestures and multi-modal conditions. The literature review seems adequate. The results seem to indicate that the pre-processing step was responsible for a reasonably good result in terms of gesture appropriateness for both the main agent and the interlocutor; however, naturalness scored quite low due to the frame generation capability of the system being capped at 300 frames. In general, the authors provide some insights as to what their method may have positively added. However, there isn't much description of how this cross-modal attention encoder actually works or how it can be replicated, only a very brief explanation in Section 3.3. This seems to be a core piece of the contributions and feels like it could have received more "attention". There are a few things I would like to see clarified in the paper if it gets accepted, before the final submission:
- Please elaborate on the limitation of 300 frames. How easy is it to remove that limit? The authors describe that they tried, but it results in a significant increase in jitter. Is there a fundamental problem with this architecture that you think may be related to that? If so, what do you think it is?
- In Section 5 (Discussion), the authors mention "In the appropriateness for the interlocutor metric, only the NA condition has significantly higher scores than ours." However, that seems false: looking at Table 1(b), I can see SA with a score of 53.5%, above SB's 51.8%.
- In the next sentence, they add "blue bar region indicates a higher proportion of participants choosing the correctly matched motion as their preference". However, looking at SA in Figure 2(b), the blue bar represents about 30% of the total preference, so I don't think the authors managed to convey what they meant.
3: The reviewer is fairly confident that the evaluation is correct
swc28UDR8Wk
ACM.org/ICMI/2023/Workshop/GENEA_Challenge
2023
DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models
["Weiyu Zhao", "Liangxiao Hu", "Shengping Zhang"]
This paper describes the DiffuGesture entry to the GENEA Challenge 2023. In this paper, we utilize conditional diffusion models to formulate the gesture generation problem. The DiffuGesture system generates human-like gestures from the two-person dialogue scenario, which are responsive to the interlocutor motions and accompany with the input speech. DiffuGesture system is built upon the recent DiffGesture [39]. Specifically, we introduce a lightweight transformer encoder to fuse the temporal relationships between human gestures and multi-modal conditions. Moreover, we adopt implicit classifier-free guidance to trade off between diversity and gesture quality. According to the collective evaluation released by GENEA Challenge 2023, our system demonstrates strong competitiveness in the appropriateness evaluation.
["gesture generation", "diffusion models", "neural networks"]
ABSTRACTThis paper describes the DiffuGesture entry to the GENEA Chal-lenge 2023. In this paper, we utilize conditional diffusion models toformulate the gesture generation problem. The DiffuGesture sys-tem generates human-like gestures from the two-person dialoguescenario, which are responsive to the interlocutor motions and ac-company with the input speech. DiffuGesture system is built uponthe recent DiffGesture [ 39]. Specifically, we introduce a lightweighttransformer encoder to fuse the temporal relationships betweenhuman gestures and multi-modal conditions. Moreover, we adoptimplicit classifier-free guidance to trade off between diversity andgesture quality. According to the collective evaluation released byGENEA Challenge 2023, our system demonstrates strong competi-tiveness in the appropriateness evaluation.CCS CONCEPTS•Computing methodologies →Animation ;Neural networks ;•Human-centered computing →Virtual reality .KEYWORDSgesture generation, diffusion models, neural networksACM Reference Format:Weiyu Zhao, Liangxiao Hu∗, and Shengping Zhang. 2023. DiffuGesture:Generating Human Gesture From Two-person Dialogue With DiffusionModels . In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERAC-TION (ICMI ’23 Companion), October 9–13, 2023, Paris, France. ACM, NewYork, NY, USA, 7 pages. https://doi.org/10.1145/3610661.36165521 INTRODUCTIONHuman gestures serve as a distinct mode of communication in dailyconversations, which assists the speakers in conveying semanticinformation more effectively and facilitates interpersonal commu-nication. [ 21,29]. Therefore, generating realistic co-speech humangestures from conversations plays a crucial role in achieving im-proved interaction between virtual entities and humans. 
Our goal*Corresponding author.Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from [email protected] ’23 Companion, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0321-8/23/10. . . $15.00https://doi.org/10.1145/3610661.3616552is to generate co-speech human gestures from the two-person dia-logue. However, generating human gestures with multi-modal datasuch as audio, text, and conversational cues in two-person dialogueremains a challenging and unresolved problem.Early research in data-driven co-speech gesture generation ap-proaches often relies on statistical analysis. Levine [ 16] et al. utilizeprobabilistic models to establish the relationship between audio andgestures. In recent years, deep learning methods have been increas-ingly applied in co-speech gesture generation. Kucherenko [ 12] etal. and Yoon [ 34] et al. employ the multi-layer perceptron (MLP) andrecurrent neural network (RNN) methods to generate deterministichuman gestures, respectively. However, these approaches do notadequately address the implicit mapping between the data and ges-tures [ 13]. To achieve more diverse and personalized gesture move-ments and improve the mapping between data and gestures, thereemerge methods using GAN [ 3,25,30], diffusion models [ 27,32,39]and VQ-VAE [20, 22].However, these methods mainly focus on single-person co-speechgesture generation. 
In this paper, we present a novel approach for co-speech human gesture generation in the two-person dialogue scenario. Specifically, given the behavior of the interlocutor and the audio and textual transcriptions of the main agent, we generate the reaction and co-speech movements of the main agent, respectively. Inspired by [39], we adopt conditional diffusion models for co-speech gesture generation from two-person dialogue. Specifically, we introduce a lightweight transformer encoder to enhance the contextual relevance between human gestures and multi-modal conditions. Finally, we introduce implicit classifier-free guidance to trade off between diversity and gesture quality.
The main contributions of our work are:
• We present an early attempt to utilize conditional diffusion models for co-speech human gesture generation from two-person dialogue, which generates impressive co-speech gesture movements.
• We introduce a lightweight transformer encoder that effectively fuses the temporal relationships between human gestures and multi-modal conditions.

2 RELATED WORK
In this section, we discuss previous work in the fields of gesture generation and diffusion models.

2.1 Data-driven Gesture Generation
The data-driven approach to gesture generation has found extensive applications across various domains. In recent years, researchers have utilized audio [6, 17, 18, 22], transcribed text [3, 10, 23, 26, 27, 36], and multimodal data [2, 19, 33] to drive gesture generation. The use of audio-driven gesture generation is quite common in various applications. For example, Ginosar et al. [6] utilize an adversarial discriminator to regress gestures from audio. Qian et al. [22] employ conditional learning to achieve audio-driven gesture generation, alleviating the ambiguity in simultaneous speech and gesture synthesis.
Audio2Gestures [18] and DanceFormer [17] use a variational autoencoder [11] and a Transformer [28], respectively, to generate gestures from audio. Text-driven motion synthesis can be seen as learning a joint embedding of the text feature space and the motion feature space [22]. Text2Gestures [3] establishes the connection between text and gesture actions using a transformer. T2M-GPT [36] and MotionGPT [10], built upon the generative pre-trained transformer (GPT), treat gesture actions as a language and utilize VQ-VAE to transform text into gesture actions. MDM [27] and MotionCLIP [26] preprocess transcribed text using CLIP [23] to establish the conversion between action and text embeddings.
Recently, there has been an increasing trend in co-speech gesture generation to use multimodal data, including audio, text, and speaker ID. Yoon et al. [33] propose a model that combines multimodal context and adversarial training to generate gestures that resemble human-like movements and are synchronized with the speech content and rhythm. Rhythmic Gesticulator [2] is the first model to use neural networks to establish the relationship between gestures and audio in terms of rhythm and semantics. HA2G [19] leverages contrastive learning strategies to fully exploit the rich connections between speech audio, text, and human gestures, resulting in the generation of realistic gesture movements. However, none of the aforementioned works considers the influence of other individuals in dyadic conversations on the embodied agents.

2.2 Diffusion Models
Diffusion models are a type of probabilistic generative model based on stochastic processes [8], where initial data points gradually evolve towards the target distribution through a diffusion process at each time step. Dhariwal et al. [5] introduce classifier guidance to improve sample quality and generate higher-quality results.
Then, the introduction of classifier-free guidance [9] eliminates the need for explicit classification models and supports more open-ended and exploratory generation in various tasks. Diffusion models have recently been widely applied in various fields, such as image generation [24], 3D shape generation [31], and video generation [7].
More recently, in the context of gesture generation tasks, diffusion generative models [1, 27, 37, 39] have also been employed for co-speech gesture generation. Inspired by the work of DiffGesture [39] in 2D gesture generation, we have developed a framework for generating 3D gesture poses from multimodal data in a two-person dialogue scenario.

3 METHOD
Given the behavior of the interlocutor and the audio and textual transcriptions of the main agent, our goal is to generate the listening reactions and co-speech motions simultaneously. The architecture of our system is depicted in Figure 1(a). We first introduce the problem definition in Section 3.1. Then we present the diffusion process and reverse process for gesture generation in Section 3.2. Finally, we develop a transformer encoder to fuse the temporal relationships between human gestures and multi-modal conditions in Section 3.3.

3.1 Problem Definition
Given the sequences of 3D full-body motions, we represent them as $x = \{p_1, p_2, p_3, \ldots, p_N\} \in \mathbb{R}^{N \times 3J}$, where $N$ represents the sequence length and $J$ denotes the total joint number. The reverse denoising process $G$ of the diffusion model is parameterized by $\theta$ to synthesize the main agent skeleton sequence $x_m$, which is further conditioned on the multi-modal conditions $C$ and the initial poses of the previous $M$ frames $x_{pre}$. The learning objective can be expressed as $\arg\min_\theta \| x_m - G_\theta(C, x_{pre}) \|$.

3.2 Diffusion-based Gesture Generation
Inspired by the previous work [39], we extend this model to the two-person dialogue scenario. Unlike the 2D skeletal upper-body poses generated in [39], we synthesize full-body human gestures in a two-person dialogue scenario.

Diffusion Process.
The diffusion process, also known as the forward process, is used to approximate the posterior distribution $q(x_{1:T} \mid x_0)$. It gradually introduces Gaussian noise into the original distribution based on the variance sequence $\beta_1, \ldots, \beta_T$, where $\beta_i \in (0, 1)$. The diffusion process is defined as follows:

$$q(x_t^{1:N} \mid x_{t-1}^{1:N}) = \mathcal{N}\big(\sqrt{1 - \beta_t}\, x_{t-1}^{1:N},\ \beta_t I\big), \qquad (1)$$

$$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t^{1:N} \mid x_{t-1}^{1:N}), \qquad (2)$$

where $x_t^{1:N}$ represents the main agent motion sequence $\{p_i^m\}_{i=1}^{N}$ at denoising step $t$. Next, we will slightly abuse notation and use $x$ to represent $x^{1:N}$. By progressively adding noise in this manner to the original gesture motions $x_0$, the sequence approaches a distribution that closely resembles white noise.

Reverse Process. The reverse process, also known as the generation process, estimates the joint distribution $p_\theta(x_{0:T})$. The reverse process of diffusion models also maintains the form of a Gaussian transition. Additionally, following the idea of classifier-free guidance, we train the model in both unconditional and conditional generation settings to generate more realistic and diverse gesture motions. The reverse process is defined as follows:

$$p_\theta(x_{0:T}) = p_\theta(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t, C), \qquad (3)$$

$$\text{where}\quad p_\theta(x_{t-1} \mid x_t, C) = \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t, C),\ \Sigma_\theta(x_t, t)\big). \qquad (4)$$

Equation 4 represents conditional generation, and we set the conditions $C$ to zero (denoted as $\varphi$) for unconditional generation in the training stage. The corrupted noisy gesture sequence $x_t$ is sampled by $q(x_t \mid x_0)$.

Training Loss. According to DDPM [8], the previous corrupted gesture sequence $x_{t-1}$ is defined as follows:

$$x_{t-1} = \frac{x_t - \sqrt{1 - \bar{\alpha}_t}\, \hat{\varepsilon}}{\sqrt{\bar{\alpha}_t}}, \qquad (5)$$

$$\text{where}\quad \bar{\alpha}_t = \prod_{i=1}^{t} (1 - \beta_i). \qquad (6)$$

Figure 1: Overview of the DiffuGesture framework. In the preprocessing stage (yellow), we develop a condition encoder and a prepose encoder to process multi-modal data and previous poses, respectively. Then we concatenate the two outputs together to create condition features $C$.
In the training stage (green), we introduce classifier-free guidance to train the transformer encoder. In the sampling stage (pink), we start with random noise $x_T$ and generate a clean sample $x_0$ through $T$ denoising steps.

So we can denoise the Gaussian noise to the original gesture motion distribution step by step. We then use the Mean Squared Error (MSE) loss to compute the loss between the estimated noise and the actual noise at each time step [39]:

$$\mathcal{L}_{simple} = \mathbb{E}_q \Big[ \big\| \varepsilon - \varepsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \varepsilon,\ C,\ t\big) \big\|^2 \Big], \qquad (7)$$

where $\varepsilon_\theta$ is the predicted Gaussian noise and $\varepsilon$ represents the actual added noise. During the training process, we randomly mask the conditions $C$ for the unconditional setting.

Sampling. Generating motion from speech is an implicit mapping rather than a direct one-to-one correspondence between speech and gestures. To ensure a better correlation between audio and actions, we introduce classifier-free guidance [9]. From the perspective of gesture generation, we can consider it as follows:

$$G_M = G(x_t, \varphi, t) + s \cdot \big( G(x_t, C, t) - G(x_t, \varphi, t) \big), \qquad (8)$$

where $s$ is a hyperparameter. As mentioned in the training loss section, during the training process we utilize random masking to create unconditional input for training the unconditional model. We then train a single transformer encoder and MLP layer shared under the various conditioning setups of the conditional and unconditional models. This enables us to realize classifier-free guidance.
Based on the aforementioned context, diffusion models can be used to generate natural embodied agent gestures in a two-person dialogue setting.

3.3 Cross-Modal Attention Encoding
Generating 3D gesture poses using conditional diffusion models is different from generating images. Both the pose sequence $x$ and the multi-modal conditions $C$ exhibit strong temporal dependencies. Here, we need to establish a module to ensure that our results are time-dependent.
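As a concrete illustration, Eq. (8) can be applied inside a DDPM-style sampling loop. The sketch below is a minimal numpy illustration, not the DiffuGesture implementation: `toy_eps` is a hypothetical stand-in for the trained noise predictor $G$, `cond=None` plays the role of the null condition $\varphi$, and the per-step update follows the paper's Eq. (5) with $\bar{\alpha}_t$ from Eq. (6).

```python
import numpy as np

def toy_eps(x, cond, t):
    # Hypothetical noise predictor standing in for the trained model G.
    # cond=None plays the role of the null condition "phi".
    bias = 0.0 if cond is None else cond.mean()
    return 0.1 * x + bias

def cfg_sample(shape, cond, betas, s=2.5, seed=0):
    """Classifier-free guided sampling following Eq. (8) and Eq. (5)."""
    rng = np.random.default_rng(seed)
    alpha_bar = np.cumprod(1.0 - betas)          # Eq. (6)
    x = rng.standard_normal(shape)               # start from noise x_T
    for t in reversed(range(len(betas))):
        eps_c = toy_eps(x, cond, t)              # conditional prediction
        eps_u = toy_eps(x, None, t)              # unconditional prediction
        eps = eps_u + s * (eps_c - eps_u)        # guided noise, Eq. (8)
        # Denoising update as stated in Eq. (5)
        x = (x - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
    return x

# 30 frames of 78-dim poses, a dummy 5-dim condition per frame.
betas = np.linspace(1e-4, 0.02, 10)
motion = cfg_sample((30, 78), cond=np.ones((30, 5)), betas=betas)
print(motion.shape)
```

A guidance scale `s > 1` pushes samples toward the conditional prediction, trading diversity for condition fidelity, which is the diversity/quality trade-off described above.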
Unlike previous work in the GENEA 2022 challenge that utilizes LSTMs [4], VQ-VAEs [20], and graph models [38], we employ a lightweight transformer encoder to encode $N$ frames of continuous motions and multi-modal data. We align the noisy gesture sequence $x_t$ and the multi-modal conditions $C$ in the time dimension and treat each frame as a separate token. The time step $t$ is also treated as a separate token. We then utilize attention mechanisms for encoding:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\frac{Q K^T}{\sqrt{d_k}}\Big) V, \qquad (9)$$

where $Q$, $K$, and $V$ are the query, key, and value matrices computed from the input tokens in the multi-head attention mechanism.

4 EXPERIMENT
4.1 Data Processing
The only dataset we use is the GENEA Challenge 2023 [14] dataset, which is an extension of Lee et al.'s Talking With Hands [15] dataset. The dataset includes participants consisting of a main agent (tasked with generating motion) and an interlocutor (the other party in the conversation). The conversation data in the dataset is in dyadic form, providing audio and text transcriptions for both parties, speaker IDs, and motion. In the provided official data, each recorded conversation is duplicated with flipped roles to augment the training data.
We fully leverage the various information available in the dataset, including the audio and transcribed text of both the main agent and the interlocutor, as well as the speaker IDs. We follow the same processing approach as the baseline [4] for handling audio, transcriptions, and human body joints. We obtain three audio features at a sampling rate of 44100 Hz: mel-spectrograms, MFCCs, and prosodies. The generated feature frames have a rate of 30 FPS, and their length matches the duration of the motion sequence. We encode the text using fastText, resulting in word vectors of dimension 300. Additionally, two extra dimensions are used to indicate whether the speaker is silent or laughing.
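The per-frame tokens described in Section 3.3 are fused with scaled dot-product attention, Eq. (9). A minimal single-head numpy sketch, independent of the DiffuGesture codebase (the token count 31 below is an arbitrary illustration of N frames plus one time-step token):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, Eq. (9)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (N_q, N_k) similarity matrix
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights

# Example: 30 frame tokens plus one time-step token, model width 16.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((31, 16))
out, w = attention(tokens, tokens, tokens)   # self-attention over all tokens
print(out.shape)  # (31, 16)
```

In multi-head attention this computation is repeated with separate learned projections of the tokens per head, and the head outputs are concatenated.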
Furthermore, we define the identity information of each speaker using one-hot encoding.
For the processing of motion data, we select 25 joints, including the root node, which have a significant influence on skeleton motion. These joints are represented in a dimension of 78. To generate high-quality motion sequences, we segment the motion sequence into chunks of 300 frames each, which serve as inputs to the diffusion process. To ensure continuity between adjacent motion segments, we extract the preceding 50 poses as part of the generation condition. After aligning the audio features, encoded text, identity information, and speakers' motion sequences in the temporal dimension, we obtain the same length as the motion sequences. Similarly, the previous poses are mapped to the corresponding dimension after being processed by the prepose encoder.

4.2 Evaluation
The evaluation of our approach is conducted through subjective assessment by the organizers of the GENEA Challenge 2023 and other participating teams. The organizers recruit study participants residing in the UK, IE, USA, CAN, AUS, and NZ, who have English as their first language, via crowdsourcing platforms to perform the evaluations. Multiple attention checks are implemented during the experiment to ensure the participants' engagement and attentiveness. The evaluation of this challenge consists of three aspects: human-likeness, appropriateness for agent speech, and appropriateness for the interlocutor. The specific results are presented in Table 1 and Table 2. The natural motion is labeled NA. Our method is labeled SB in the tables.

Human-likeness. The study participants watch 8 to 10 seconds of video and rate how human-like the motion of the virtual character is, independent of the dialogue content and the speaker. DiffuGesture performs poorly on this metric.

Figure 2: The bar plots display the response distribution in the appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. The blue bars represent preferred matched motion responses, and the red bars represent preferred mismatched motion responses. The height of each bar corresponds to the fraction of responses in each category. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.

Appropriateness for agent speech. This metric evaluates whether the motion of the virtual character is appropriate for the given speech while controlling for the overall human-likeness of the motion [35]. During the testing process, study participants are presented with a pair of videos, both from the same condition, where one video matches the specific speech and the other is from an unrelated speech. Both videos play the specific speech, and participants are asked to select the video they believe best matches the speech.

Appropriateness for the interlocutor. During the conversation, both participants in the dialogue influence each other. Therefore, this metric evaluates whether the motion of the virtual character is appropriate for the given interlocutor's behavior (including speech and motion) while controlling for the overall human-likeness of the motion.
Study participants are also presented with a pair of videos, where the behavior of the main agent remains fixed, but the behavior of the interlocutor is randomly replaced in one of the videos. Participants are then asked to select the video that best matches the behavior of the interlocutor. DiffuGesture achieves promising results on this metric.

Table 1: Summary statistics of user-study responses from both appropriateness studies, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness after splitting ties. Conditions are ordered by mean appropriateness score.

(a) Appropriateness for agent speech

| Condition | MAS | Pref. matched | 2 | 1 | 0 | −1 | −2 | Sum |
| NA | 0.81±0.06 | 73.6% | 755 | 452 | 185 | 217 | 157 | 1766 |
| SG | 0.39±0.07 | 61.8% | 531 | 486 | 201 | 330 | 259 | 1807 |
| SJ | 0.27±0.06 | 58.4% | 338 | 521 | 391 | 401 | 155 | 1806 |
| BM | 0.20±0.05 | 56.6% | 269 | 559 | 390 | 451 | 139 | 1808 |
| SF | 0.20±0.06 | 55.8% | 397 | 483 | 261 | 421 | 249 | 1811 |
| SK | 0.18±0.06 | 55.6% | 370 | 491 | 283 | 406 | 252 | 1802 |
| SI | 0.16±0.06 | 55.5% | 283 | 547 | 342 | 428 | 202 | 1802 |
| SE | 0.16±0.05 | 54.9% | 221 | 525 | 489 | 453 | 117 | 1805 |
| BD | 0.14±0.06 | 54.8% | 310 | 505 | 357 | 422 | 220 | 1814 |
| SD | 0.14±0.06 | 55.0% | 252 | 561 | 350 | 459 | 175 | 1797 |
| SB | 0.13±0.06 | 55.0% | 320 | 508 | 339 | 386 | 262 | 1815 |
| SA | 0.11±0.06 | 53.6% | 238 | 495 | 438 | 444 | 162 | 1777 |
| SH | 0.09±0.07 | 52.9% | 384 | 438 | 258 | 393 | 325 | 1798 |
| SL | 0.05±0.05 | 51.7% | 200 | 522 | 432 | 491 | 170 | 1815 |
| SC | −0.02±0.04 | 49.1% | 72 | 284 | 1057 | 314 | 76 | 1803 |

(b) Appropriateness for the interlocutor

| Condition | MAS | Pref. matched | 2 | 1 | 0 | −1 | −2 | Sum |
| NA | 0.63±0.08 | 67.9% | 367 | 272 | 98 | 189 | 88 | 1014 |
| SA | 0.09±0.06 | 53.5% | 77 | 243 | 444 | 194 | 55 | 1013 |
| BD | 0.07±0.06 | 53.0% | 74 | 274 | 374 | 229 | 59 | 1010 |
| SB | 0.07±0.08 | 51.8% | 156 | 262 | 206 | 263 | 119 | 1006 |
| SL | 0.07±0.06 | 53.4% | 52 | 267 | 439 | 204 | 47 | 1009 |
| SE | 0.05±0.07 | 51.8% | 89 | 305 | 263 | 284 | 73 | 1014 |
| SF | 0.04±0.06 | 50.9% | 94 | 208 | 419 | 208 | 76 | 1005 |
| SI | 0.04±0.08 | 50.9% | 147 | 269 | 193 | 269 | 129 | 1007 |
| SD | 0.02±0.07 | 52.2% | 85 | 307 | 278 | 241 | 106 | 1017 |
| BM | −0.01±0.06 | 49.9% | 55 | 212 | 470 | 206 | 63 | 1006 |
| SJ | −0.03±0.05 | 49.1% | 31 | 157 | 617 | 168 | 39 | 1012 |
| SC | −0.03±0.05 | 49.1% | 34 | 183 | 541 | 190 | 45 | 993 |
| SK | −0.06±0.09 | 47.4% | 200 | 227 | 111 | 276 | 205 | 1019 |
| SG | −0.09±0.08 | 46.7% | 140 | 252 | 163 | 293 | 167 | 1015 |
| SH | −0.21±0.07 | 44.0% | 55 | 237 | 308 | 270 | 144 | 1014 |

Table 2: Summary statistics of user-study ratings from the human-likeness study, with confidence intervals at the level α = 0.05. Conditions are ordered by decreasing sample median rating. Our entry is SB.

| Condition | Median | Mean |
| NA | 71 ∈ [70, 71] | 68.4±1.0 |
| SG | 69 ∈ [67, 70] | 65.6±1.4 |
| SF | 65 ∈ [64, 67] | 63.6±1.3 |
| SJ | 51 ∈ [50, 53] | 51.8±1.3 |
| SL | 51 ∈ [50, 51] | 50.6±1.3 |
| SE | 50 ∈ [49, 51] | 50.9±1.3 |
| SH | 46 ∈ [44, 49] | 45.1±1.5 |
| BD | 46 ∈ [43, 47] | 45.3±1.4 |
| SD | 45 ∈ [43, 47] | 44.7±1.3 |
| BM | 43 ∈ [42, 45] | 42.9±1.3 |
| SI | 40 ∈ [39, 43] | 41.4±1.4 |
| SK | 37 ∈ [35, 40] | 40.2±1.5 |
| SA | 30 ∈ [29, 31] | 32.0±1.3 |
| SB | 24 ∈ [23, 27] | 27.4±1.3 |
| SC | 9 ∈ [9, 9] | 11.6±0.9 |

5 DISCUSSION
As shown in Table 1, we achieve satisfactory results on both the appropriateness for agent speech and the appropriateness for the interlocutor metrics. Our scores for these two metrics are 0.13 and 0.07, respectively. For the appropriateness for the interlocutor, we achieve favorable results: our "Pref. matched" rate is 51.8%. Furthermore, as shown in Figure 2(b), a considerable proportion of participants chose our results as their preferred matched motion responses. We believe that several factors contribute to these results.
Firstly, we make effective use of the provided information, including audio, transcribed text, and interlocutor behavior; our data processing methods have demonstrated their effectiveness. Additionally, the introduced cross-modal attention encoder proves to be effective. It enables us to adequately encode information from different modalities, thus generating plausible motions of the main agent with respect to the behavior of the interlocutor.
In contrast, we achieve unsatisfactory results on the human-likeness metric, with a median score of only 24. The challenge provides long-term human gesture sequences of variable lengths, while our naive diffusion models, without specific designs, only support generating fixed-length motion sequences. We segment the condition sequences, predict 300 frames for each segment, and concatenate the predicted fixed-length motion sequences to generate the complete motions. This results in noticeable jitter at the junctions of the predicted fixed-length motion sequences. To eliminate this phenomenon, we make some efforts such as taking the previously predicted motions and the acceleration between adjacent frames as part of the conditions. Furthermore, we also increase the length of the generated sequences to reduce the discontinuities of the generated motions. However, these naive methods do not yield the expected results: the acceleration constraint reduces the richness of the generated motions, making them less human-like. We also note that the motion sequences provided for evaluation are not the final optimized ones, which may cause undesired evaluation results.

6 CONCLUSION
We propose DiffuGesture, as described in this paper, to participate in the GENEA Challenge 2023. Based on conditional diffusion models, we develop a system that generates co-speech human gestures for the main agent in two-person dialogue.
In our system, we encode the features of audio, transcriptions, and interlocutor behavior using a transformer encoder. Furthermore, we adopt classifier-free guidance to trade off between diversity and gesture quality. The evaluation results show that DiffuGesture performs well in terms of the appropriateness for the interlocutor metric. However, compared to other systems participating in the challenge, it does not generate high-fidelity human-like motions effectively.
In the future, we will continue to explore conditional diffusion models to generate high-fidelity co-speech human gestures in various scenarios. We aim to handle the generation of variable-length motion sequences and reduce the distortion of motions at breakpoints. Additionally, we intend to investigate the incorporation of semantic supervision to aid in the generation of co-speech gestures. We will focus on these aspects in our future work.
tORcTBdoEx
Review of Diffu2Gesture
6: Marginally above acceptance threshold
Paper Summary: The paper presents a novel approach to gesture generation using diffusion models. The authors extend the work from the upper body to the full body, which seems to have resulted in a decrease in performance compared to the original DiffGesture. The paper discusses the use of shared MLP and Transformer models, but the specifics are not clearly explained.
Relevance: The paper is relevant to the field of gesture generation, particularly in the context of using diffusion models. The extension of the work from upper body to full body is a significant contribution, despite the observed decrease in performance.
Significance: The paper's significance lies in its novel approach to gesture generation using diffusion models. However, the results indicate that the quality of the generated gestures is not as good as expected, which raises questions about the effectiveness of the proposed method.
Paper Strengths: The paper presents a novel approach to gesture generation using diffusion models. The extension of the work from the upper body to the full body is a significant contribution.
Paper Weaknesses: The performance of the proposed method seems to be lower than the original DiffGesture. The paper could benefit from a more detailed explanation of the proposed method, particularly the shared MLP and Transformer models. The explanation of the principle, specifically the balance between diversity and quality, is not clear. The results indicate that the quality of the generated gestures is not as good as expected. The authors should discuss why the quality of the generated gestures is not as good as expected, despite the use of diffusion models.
Further Comments:
1. The result seems not as good as the original DiffGesture; is this because the original upper-body work is extended to the whole body?
2. The specific method is not clearly stated and is too simple. What is the meaning of "share" in Fig. 1 — are the MLP and Transformer the same one?
3. The principle is not clearly explained. Is L276 supposed to concern the conditional control for equilibrium, not diversity and quality? Diversity is controlled by the different noises of the inputs, and quality is determined by the architecture of the model itself.
4. L535: According to Figure 3(a) of Appendix A1, besides NA, are SG and SJ not also significantly higher than the proposed system?
5. L438: What do you mean by "per second" — is it per chunk or per segment?
6. L443: [300, 932] — consider a different way of writing it; it looks like a reference.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
eBLV3i7PG1c
automl.cc/AutoML/2023/ABCD_Track
2023
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments
["Iordanis Fostiropoulos", "Laurent Itti"]
Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as ‘data-parallelism’ or ‘model-parallelism’. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, ‘Tablator’, where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR; https://github.com/fostiropoulos/ablator
["Machine Learning Systems", "Ablation Experiments", "Experiment Design"]
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments
Iordanis Fostiropoulos¹, Laurent Itti¹
¹University of Southern California, Los Angeles, California

Abstract: Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as 'data-parallelism' or 'model-parallelism'. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, 'Tablator', where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR: https://github.com/fostiropoulos/ablator

1 Introduction
Machine Learning (ML) research has been criticized for an inability to explain the reasons a method provides an improvement on a specific benchmark.
It can be unclear whether a novel component is responsible for the improvement or whether the result is a statistical outlier [35].
Ablation is used to understand how the hyperparameters and architectural components contribute to the performance of a method. This is in contrast to Hyper-Parameter Optimization (HPO) or Neural Architecture Search (NAS), where the objective is to search for the single best-performing configuration. As the complexity of ML models increases, so does the number of components and parameters that need to be ablated, which increases the search space of possible configurations. Therefore, efficient horizontal scaling of multiple parallel experimental trials is necessary.
There is a lack of available frameworks for horizontal scaling of ablation experiments. Currently, ML practitioners manually perform horizontal scaling for experiments, such as for hyperparameter selection, distributed execution, and the consolidation and analysis of artifacts [10]. Additionally, current frameworks [31] for distributed execution do not provide native support for maintaining the state of an experiment and resuming the execution of multiple trials, referred to as experiment persistence. We find that errors in the early stages of experiments can propagate to the analysis and lead to misleading conclusions. Possible errors may be introduced by sampling bias in the hyperparameter selection strategy or by fault-intolerance of the distributed execution, i.e., survival bias. The execution of randomized controlled trials is necessary to determine causal effects [23, 20]. We identify several sources of errors that can influence the results. We categorize them as Analysis, Execution, and Implementation errors.
AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. © 2023 the authors, released under CC BY 4.0.

Figure 1: Left is the rapid prototyping process when using ABLATOR, where only the method implementation and the configuration are required to RUN() the study and provide ANALYSIS(). ABLATOR handles the horizontal scaling of experimental trials on a cluster of nodes and is fault tolerant, where trials can be continued on the same or a different node due to the Persistence provided by ABLATOR. Right is the process without ABLATOR, where the user must use different libraries or manually perform 'HPO Selection', 'Resource Allocation', and 'Analysis'. Additional manual effort is required to integrate between the libraries, where errors between the different steps propagate to an erroneous analysis. ABLATOR provides automation by removing boiler-plate code and managing errors internally.

Analysis errors can result from the sampling bias of the hyperparameter selection. Nonrandom effects during experiment execution can introduce analysis errors. For example, inconclusive trials due to out-of-memory errors caused by a larger model footprint would introduce survival bias to the analysis that favors smaller models. Implementation errors are mistakes made by users caused by the increased code complexity of ablating multiple method components while maintaining different code bases. We discuss the details of our analysis in Section 3.2.
To aid error-free horizontal scaling of multiple experiments in the ML community, we propose a stateful experiment paradigm where we unify all experiment stages under a single framework. A stateful experiment is initialized by the configuration and code implementation of a method. Our framework maintains the state of each experimental trial and provides experiment persistence, where the experiment can continue execution agnostic to the execution environment.
The analysis artifacts are produced automatically by the experiment state for faster prototyping. Our paradigm is implemented in our tool ABLATOR with support for PyTorch [33] model development. We present an analysis of the sources of errors and provide recommendations that can be useful beyond our framework. We use our framework to study the effect of multiple training and model components on the performance of a Transformer model for tabular datasets, 'Tablator', where we perform a large-scale ablation study of 2,337 trials. Our contributions can be summarized as follows. First, we provide a formalization of a stateful experiment design paradigm that we use to address common errors in the execution of ML experiments. Second, we present ABLATOR, a framework that implements our paradigm and facilitates the automated execution and analysis of a model implementation given a configuration. Third, we identify sources of error in ML ablation studies and provide recommendations for mitigating them. Fourth, we perform the largest-to-date ablation study of a Deep Learning model on tabular datasets and provide analysis that can be useful to the research community.
We first introduce the features of ABLATOR relevant to horizontal scaling of experiments. Next, we evaluate the main features of our tool in a case study demonstrating the horizontal scaling capabilities of ABLATOR. We present our results using three research questions, Sections 3.1 to 3.3.

2 Methods
To implement ABLATOR and address common issues in horizontal scaling of experiments, it is necessary to introduce the formalism of a 'stateful experiment design' paradigm. In this section, we introduce our paradigm and, in Section 2.4, the implementation of ABLATOR.
We identify three stages of an experiment: the design, execution, and analysis (Sections 2.1 to 2.3).

2.1 Experiment Design

During the design phase of an ML ablation study, a hypothesis is defined as an experiment on the improvement that an architectural component, such as Residual Connections, provides to the performance of the model. The search-space of our hypothesis can be defined as Residual = [True, False]. The methodology of our experiment is defined by the implementation of the model. Multiple experimental trials are required to improve the statistical power of a test [20], which requires randomly sampling from the search-space. An experimental trial can be described as a stochastic process that produces a performance metric. The stochasticity can be observed when performance differs significantly under identical initial conditions, such as re-running the same experiment but obtaining different results.

Thus, to define a trial, we maintain two states to describe the system at any given point: the initial conditions (Sections 2.1.1 and 2.1.2) and the current state (Section 2.2). The initial conditions of a trial are defined by the sampled hyperparameters and the implementation.

distributed.yaml:

    total_trials: 2000
    optim_metrics: [[val_loss, min]]
    tune:
      train_config.optimizer_config.name: ["adam", ...
      train_config.dataset: ["year", "yahoo", "helena", ...
      model_config.mask_type: ["mix", "global", "full", "random"]
      model_config.residual: [True, False]
      model_config.random_mask_alpha: [0.5, 1]

prototyping.yaml:

    train_config:
      dataset: adult
      optimizer_config:
        name: adam
    model_config:
      mask_type: random

Configuration classes:

    @configclass
    class TablatorConfig(ModelConfig):
        residual: bool = True
        d_out: Derived[ty.Optional[int]] = None
        mask_type: MaskType = MaskType("random")

    @configclass
    class RunConfig(ParallelConfig):
        experiment_dir: Stateless[Optional[str]] = None
        model_config: ModelConfig
        train_config: TrainConfig

Figure 2: ABLATOR provides a configuration system specific to ML experiments, where it has to encompass multiple trials in a compact definition and be unambiguous. On the left is an illustration of the configuration for distributed execution (distributed.yaml) and method prototyping (prototyping.yaml). On the right, the configuration is type checked by the ABLATOR library. The library provides flexible type definitions (red) that are resolved during run-time. The configuration is compact and unambiguous at initialization, supporting our stateful experiment design paradigm in Section 2.1.

2.1.1 Configuration. The configuration describes the hyperparameter search-space from which the hyperparameters are sampled. Two custom Python annotations are introduced, Stateless and Derived, to define attributes to which the experiment state is agnostic, while unannotated attributes are assumed to be stateful control variables. Stateful attributes require an assignment during the initialization stage unless they are annotated as Optional.

Stateless configuration attributes can be used as a proxy for variables that can take different value assignments between trials or experiments. For example, the learning rate can be set as an independent variable and must be annotated as stateless. Additionally, there are variables that take different values between experiments and trials to which the state is agnostic; for example, a random seed or a directory path that differs between execution environments can be annotated as stateless.

Derived attributes are undecided at the start of the experiment and do not require a value assignment. Instead, the value is determined by internal experiment processes that can depend on other experimental attributes, such as the dataset.
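The Stateless/Derived annotations of Figure 2 suggest how such a type system can sit on top of Python's typing machinery. The sketch below is an illustrative approximation, not ABLATOR's actual implementation; every name besides Stateless and Derived is invented for the example.

```python
from typing import Annotated, Optional, get_args, get_origin, get_type_hints

class _Marker:
    """Subscriptable marker: Stateless[T] / Derived[T] -> Annotated[T, marker]."""
    def __init__(self, name):
        self.name = name
    def __getitem__(self, tp):
        return Annotated[tp, self]

Stateless = _Marker("stateless")
Derived = _Marker("derived")

def classify_attributes(cls):
    """Split a config class's annotated attributes into stateful, stateless,
    and derived groups, mirroring the paradigm of Section 2.1.1."""
    groups = {"stateful": [], "stateless": [], "derived": []}
    for name, tp in get_type_hints(cls, include_extras=True).items():
        if get_origin(tp) is Annotated and isinstance(get_args(tp)[-1], _Marker):
            groups[get_args(tp)[-1].name].append(name)
        else:
            groups["stateful"].append(name)  # unannotated => control variable
    return groups

class RunConfig:
    experiment_dir: Stateless[Optional[str]]  # environment-specific
    d_out: Derived[Optional[int]]             # inferred from the dataset
    residual: bool                            # stateful control variable
```

A classifier like this is what allows the framework to exclude stateless values from an experiment's identity and to defer derived values until execution.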
However, given the same initial state, the attribute is expected to result in the same value and is therefore deterministic. For example, the input size used in a model's architecture, which depends on the dataset, will be annotated as Derived during the experiment design phase.

The annotations address common requirements of ML experiments, where a configuration may have to describe a search-space that encompasses multiple trials, as opposed to taking on a specific value assignment at initialization. Additionally, an ML experiment can have attributes that are difficult to model at initialization but can be inferred during execution. For a stateful design paradigm, the configuration should be unambiguous at the initialization state, i.e. Figure 2.

2.1.2 Implementation. The implementation describes the methodology of the hypothesis. Invariance of the implementation w.r.t. the method evaluated produces a single code artifact that encapsulates all methods, i.e. a single code base for both using and not using residual connections. The implementation computes one or more evaluation metrics. Lastly, the implementation should make a deterministic value assignment to the variables we defined as Derived.

Implementation invariance provides a compact representation and is robust to errors. A compact representation provides ease of use as a consequence of a shared implementation among the ablating components, where the differences are specified through the configuration and applied by conditional if statements. The advantage of this approach is that the performance variance caused by implementation differences is minimized, where even the order of matrix multiplications can have significant effects on the method performance [46].

2.2 Experiment Execution

The experiment state can be Running or Complete as the aggregate of the states of all experimental trials. Each trial can be in three additional states: Pending, Failed, or Pruned. Pending trials are defined by their initial conditions alone, i.e.
the sampled hyperparameters. A Running trial extends the definition to include a checkpoint. Complete trials extend the definition to include one or more metrics, such as the validation loss. Pruned and Failed trials are the result of irrecoverable errors during initialization or execution. A fault-tolerant strategy reschedules trials with recoverable errors as Pending and attempts to resume from the checkpoint. A long-running experiment can thus be interrupted (e.g. by server maintenance) while errored trials do not interfere with the results (e.g. failed trials due to recoverable errors).

A checkpoint describes the optimization state of a trial and contains sufficient information to resume execution. ABLATOR stores the model weights, optimizer, scheduler, and training meta-data, such as the current training iteration, using a compact representation. The checkpoint mechanism in ABLATOR can be extended to support custom use cases, e.g. RL. Lastly, maintaining the state of the experiment requires keeping track of the checkpoints and results. Multiple checkpoints are stored locally on each node and can be synchronized with cloud storage. The experiment is agnostic to the execution environment: experiment persistence.

2.3 Actionable Analysis

Analysis that is actionable is a result of automation that provides sufficient artifacts to support decision making. The artifacts should facilitate a quick and informed decision on the likelihood of the hypothesis. The experiment state is used to infer the hypothesis, i.e. 'what are we ablating?', and the conclusiveness of the analysis, i.e. 'has the trial failed?'. The analyses ABLATOR provides infer the search-space, such as control and independent variables, from the configuration and the variable type to produce the corresponding artifacts. The artifacts produced address common problems in evaluating ML methods (Section 3.2).
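The trial lifecycle of Section 2.2 (Pending trials rescheduled on recoverable errors, Failed only on irrecoverable ones) can be sketched as a small state machine. The code below is an illustrative assumption that mirrors the paper's state names; it is not ABLATOR's API.

```python
import enum

class TrialState(enum.Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETE = "complete"
    FAILED = "failed"    # irrecoverable error
    PRUNED = "pruned"

class RecoverableError(Exception):
    """E.g. a node pre-emption; the trial can resume from its checkpoint."""

def run_trial(trial, max_retries=3):
    """Fault-tolerant execution: a recoverable error puts the trial back to
    PENDING (rescheduled, checkpoint kept); the trial becomes FAILED only
    after the retry budget is exhausted."""
    for _ in range(max_retries):
        trial["state"] = TrialState.RUNNING
        try:
            # resume from the checkpoint if one exists, else start fresh
            trial["metric"] = trial["fn"](trial.get("checkpoint"))
            trial["state"] = TrialState.COMPLETE
            return trial
        except RecoverableError:
            trial["state"] = TrialState.PENDING
    trial["state"] = TrialState.FAILED
    return trial
```

Keeping the state transitions explicit like this is what lets failed-but-recoverable trials stay out of the analysis without biasing it.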
For each attribute, the goal is to encapsulate the best, average, variance, and distribution of the performance metric under a single figure, i.e. Figures 4 and 5.

2.4 ABLATOR

ABLATOR is designed in Python with support for PyTorch models, while the distributed execution system uses Ray Core [31]; Figure 1. We describe the features of ABLATOR important in addressing a stateful experiment paradigm. ABLATOR can be extended or customized specific to the use-case without loss of automation, where an object-oriented design provides access to function overwriting. ABLATOR provides ease of use in that it only requires defining an experiment through an implementation and a configuration. Automation is supported by providing an abstraction layer over distributed execution with fault tolerance, artifact consolidation, and analysis. Our framework is agnostic to the execution environment and can run on a laptop or a cluster of nodes.

Configuration uses a hierarchical dictionary-like format that is easy to understand and can be converted to and from yaml files. ABLATOR uses a strict type-checking system with custom annotations (Section 2.1.1). A unique signature identifier ("ID") is generated for each experiment that corresponds to the values of the stateful configuration attributes, while for a trial, the identifier is based on the unique value assignment of all configurable properties. Thus, the configuration system allows for a hierarchical representation of trials under a single experiment and facilitates experiment persistence, where multiple experiments are stored in the same directory.

Implementation A Trainer class manages the physical resources of the experiment. There are two options according to the use case: ProtoTrainer for prototyping in a local environment, and ParallelTrainer for horizontal scaling of a single experiment. ParallelTrainer is unique to ABLATOR, where multiple trials are managed and executed in parallel.
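One way to derive the identifiers described above, an experiment ID from the stateful attributes only and a trial ID from all configured values, is a hash over a canonical serialization. The helper and the example config values below are assumptions for illustration, not ABLATOR's actual scheme.

```python
import hashlib
import json

def signature(values):
    """Stable short ID from a canonical JSON serialization of config values."""
    payload = json.dumps(values, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:10]

# Hypothetical flattened configuration values.
config = {
    "model_config.residual": True,       # stateful
    "train_config.dataset": "adult",     # stateful
    "experiment_dir": "/tmp/run1",       # stateless: excluded from experiment ID
}
stateless = {"experiment_dir"}

experiment_id = signature({k: v for k, v in config.items() if k not in stateless})
trial_id = signature(config)
```

Because stateless attributes (e.g. paths that differ between machines) are excluded, the same experiment resumed in a different environment maps to the same experiment ID, which is what enables experiment persistence.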
Moving from prototyping to experiment deployment requires a single change: ProtoTrainer =⇒ ParallelTrainer.

Artifact Persistence On every resource node, the trials are executed in parallel, and failure of a single trial does not interrupt the experiment. We use the master node to maintain the experiment state (Section 2.2) and synchronize the artifacts of all nodes with a central database. Cloud compute nodes are often ephemeral, and restarting the experiment only requires the files to be synchronized between the centralized storage and all nodes. Furthermore, the files stored in the central storage are sufficient to perform an analysis or recover from errors.

Analysis Artifacts are specific to numerical and categorical attributes. The attribute type is informed by the configuration. Figures are artifacts that summarize the mean, best, and distribution of a performance metric. For numerical attributes, we use scatter-plots with optional interpolation curves, while for categorical attributes we use violin-plots. The analysis can be extended to support custom use cases, such as additional figures or tables, while still being automatically generated from the experiment state; examples are in Section 3.3 and our supplementary.

3 Experiments and Results

We first present how ABLATOR can be used for horizontal scaling with an ablation study on 'Tablator', a Transformer model we designed for this study; Section 3.1. In Section 3.2 we categorize common errors during horizontal scaling of ablation experiments and provide our recommendations. In Section 3.3 we provide the results of an ablation experiment on a tabular dataset benchmark. For reasons of brevity, we discuss only the results most relevant to ABLATOR. We attach the code that was used for our experiments and analysis, and additional experiments, in the supplementary.

3.1 RQ-1: How can ABLATOR improve the horizontal scaling of thousands of experimental trials?

ABLATOR requires the configuration and implementation.
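The artifact rule described for the analysis stage, where the attribute's configured type decides the figure, amounts to a small dispatch. The function name below is an assumption for illustration, not part of ABLATOR's API.

```python
def artifact_for(attribute_type):
    """Pick the summary figure for an ablated attribute by its configured
    type: numerical types get a scatter-plot (with optional interpolation
    curves), everything else is treated as categorical (violin-plot)."""
    if attribute_type in (int, float):
        return "scatter-plot"
    return "violin-plot"  # bools, enums, and strings are categorical
```

Deriving the artifact from the configuration type is what makes the analysis fully automatic: no per-attribute plotting code is needed.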
We extend the implementation of FT-Transformers (FT-T) [17] (https://github.com/Yura52/tabular-dl-revisiting-models) with minimal changes to the original code. We implement a model we call 'Tablator' and evaluate all the design components of FT-T as well as the effect of Residual Connections [21] and Attention Masks inspired by BigBird [45]. We evaluate 'Full', 'Mixed', 'Global', and 'Random' attention mechanisms and explain their implementation in the supplementary.

We perform an ablation on 14 model hyperparameters and components in total, and evaluate the effect that model capacity, dropout hyperparameters, prenormalization, weight initialization, and the activation function have on the model performance. Additionally, we evaluate 7 dataset preprocessing techniques and training configurations, such as feature encoding methods, missing-value imputation, feature normalization, training time, and optimization.

The differences between 'Tablator' and FT-T are an additional module for attention masks that requires 9 additional lines of code, as well as 2 inserted lines of code for residual connections. The majority of the development effort was directed towards making the original dataset performant and converting it to a PyTorch Dataset as opposed to a Python dataclass. We define the tunable configurable hyperparameters as shown in Figure 2.

We first verified our implementation with a ProtoTrainer, and then scaled our experiment to thousands of trials with a single code change using a ParallelTrainer for our results in Section 3.3.
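The two inserted lines for residual connections are an instance of implementation invariance (Section 2.1.2): one code base, with the ablated difference applied by a conditional on the configuration. A toy sketch with invented names and no PyTorch dependency:

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    residual: bool = True  # the ablated component, set via configuration

def block_forward(x, cfg):
    """One code path for both method variants: the ablated difference
    (the residual connection) is applied by a conditional on the config."""
    h = [v * 2.0 for v in x]               # stand-in for the block's transform
    if cfg.residual:                       # the conditional "insertion":
        h = [a + b for a, b in zip(h, x)]  # add the skip connection
    return h
```

Both Residual=True and Residual=False trials share every other line of code, so performance differences can be attributed to the ablated component rather than to incidental implementation drift.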
For this experiment, it took significantly more time to write the current section of this paper than it took to write the code and start the execution of the experiments.

3.2 RQ-2: What are common sources of errors during horizontal scaling of experiments?

We identify 3 categories of errors, Analysis†, Execution‡, and Implementation∗ errors, that are based on empirical observations, and use previous analyses [10,8,9,27,36,1,46,12] to support our conclusions. In this section, we provide examples of each and attach additional analysis in our supplementary.

Figure 3: We evaluate how Budget Allocation‡ can influence the analysis of an ablation study. We vary the number of trials we use for analysis ('N trials'). We compare estimating the performance of a method on a dataset using the mean (left) (i.e. ANOVA) or the best (right) trial (i.e. proof-by-existence). Evaluating the performance of a component by its mean performance would require fewer trials for an easier dataset ('Covtype') when compared to using the best trial. For a more challenging dataset ('Aloi'), evaluating by the best trial would be more efficient, as the performance converges at around 20 trials (right figure) compared to >50 for the mean (left figure). We conclude that the ablation budget should be taken into account and be relevant to the type of analysis.

Sampling Strategy† can be incompatible with the method used to evaluate the performance of a component and lead to a misleading analysis [41]. For example, performing HPO and comparing the mean performance of the sampled trials can bias the result towards a single component variant. We perform two identical experiments using Tablator with an identical budget on the CovType ('CO') dataset [7]. When randomly sampling between 5 optimizers, AdaB [47], Adam [24], AdamW [29], RAdam [28], and SGD [39], every optimization algorithm was sampled with an even probability P(O) ≈ 0.2.
In contrast, when performing HPO with the Tree-structured Parzen Estimator (TPE) [3], SGD was oversampled with P(SGD) = 0.76, as it was found to perform relatively better compared to the other methods. The other optimization methods were undersampled by TPE, and their estimated performance is lower when compared to the empirical mean performance of the same method calculated via random sampling. When TPE was used, all optimizers appeared to underperform on average by 4.6% and 3.8% when evaluating the best and mean trial performance, respectively. We conclude that statistical tests can be influenced by the bias of the HPO method used to sample configurations, and their performance might not be fully explored.

Survival Bias† can be caused by nonrandom execution errors. We identify the trials for which there were memory errors. We perform feature importance analysis and use a surrogate random forest model [34] to predict whether a trial will result in a memory error. We find that the configuration attributes related to the dataset and the hidden dimension were the most important.

Dataset    CA↓    AD↑    HE↑    JA↑    HI↑    AL↑    EP↑    YE↓    CO↑    YA↓    MI↓
FT-T       0.459  0.859  0.391  0.732  0.729  0.960  0.898  8.855  0.970  0.756  0.746
Tablator   0.535  0.856  0.368  0.718  0.723  0.921  0.896  8.778  0.930  0.780  0.749
ΔImp.∗    -0.076  0.003  0.023  0.014  0.006  0.039  0.002  0.077  0.040 -0.024 -0.003

Table 1: We evaluate the difference between the best performing trials as reported by FT-Transformer ('FT-T') [17] and as found by our ablation experiments in Section 2.1. FT-T is in the subspace of configurations of Tablator, where a greedy HPO strategy is used as opposed to random sampling for Tablator. As such, we expect Tablator to perform similarly but not better. We use the benchmark as a way to evaluate Implementation Errors∗ from Section 3.2. We conclude that our implementation contains no errors, as the relative difference (ΔImp.∗) is within the expected margin of error between HPO and random sampling.
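The sampling bias described for TPE can be reproduced in miniature: a sampler that concentrates its budget on the apparent best arm leaves the remaining arms with only a handful of trials each, so their mean performance is poorly estimated. The weights below are toy assumptions chosen to mimic the reported probabilities, not the paper's data.

```python
import random
from collections import Counter

def sample_frequencies(weights, n=10_000, seed=0):
    """Empirical sampling frequency of each optimizer under a given sampler."""
    rng = random.Random(seed)
    names = list(weights)
    draws = rng.choices(names, weights=[weights[k] for k in names], k=n)
    counts = Counter(draws)
    return {k: counts[k] / n for k in names}

optimizers = ["AdaB", "Adam", "AdamW", "RAdam", "SGD"]
# Random search: every optimizer sampled with P(O) ~= 0.2.
uniform = sample_frequencies({o: 1.0 for o in optimizers})
# TPE-like sampler that favors the apparent winner, P(SGD) ~= 0.76.
biased = sample_frequencies({**{o: 0.06 for o in optimizers}, "SGD": 0.76})
```

An analysis that averages over such trials inherits the sampler's bias: under the TPE-like weights, four of the five optimizers contribute few trials, so comparing mean performances across optimizers is no longer a fair test.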
A larger dataset has more features, which leads to a model with a larger hidden dimension. The attributes related to the hidden dimension scored 23% higher than the average feature importance. We conclude that smaller models and datasets will exhibit a survival bias from the fewer out-of-memory execution errors, and that such bias could be mitigated by better resource allocation. For example, one can group experiments by their memory utilization so as to avoid out-of-memory errors from the largest trial.

Figure 4: Evaluation of the effect of a larger model for a regression dataset, where (RMSE)↓ is normalized for the relative difficulty of each dataset. A larger model performs better but with higher variance, where the uncertainty on the estimated performance increases. A larger model might be a riskier choice when deploying a model that requires iterative retraining.

Resource Utilization statistics‡ We observe the resource utilization statistics: the mean usage of a trial is 3,075±3,578 (MiB), while the maximum is 32,303 (MiB). The high variance in memory utilization is a consequence of a search space that correlates with memory utilization. Allocating resources based on the largest trial might be infeasible; using a heuristic for resource utilization might be necessary.

Budget Allocation‡ We vary the number of experimental trials for 10 repeated observations and report the best and mean performance in Figure 3. An increased budget reduces the variance of the mean performance. We report less variance in the performance of the best trial for repeated observations. We conclude that, for 'Tablator', fewer trials are required to obtain an estimate of the top performance, while the mean performance would require more trials.

Implementation Errors∗ Our observations on implementation errors extend previous analyses [46,27,36,12] on the impact of ML tooling, where the sources of errors are poor development practices and variance introduced by tooling.
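The mitigation suggested above, grouping trials by expected memory footprint so that over-sized trials do not silently drop out of the analysis, can be sketched with a first-fit-decreasing heuristic. The per-trial memory estimates and the budget below are illustrative assumptions.

```python
def group_by_memory(trials, budget_mib):
    """Partition trials into groups whose estimated total memory stays under
    a per-node budget; trials too large for any node are surfaced explicitly
    instead of failing nonrandomly (which would cause survival bias)."""
    groups, oversized = [], []
    for t in sorted(trials, key=lambda t: -t["est_mib"]):
        if t["est_mib"] > budget_mib:
            oversized.append(t)  # report, don't silently lose the trial
            continue
        for g in groups:         # first-fit decreasing bin packing
            if sum(x["est_mib"] for x in g) + t["est_mib"] <= budget_mib:
                g.append(t)
                break
        else:
            groups.append([t])
    return groups, oversized
```

Surfacing the oversized trials up front turns a nonrandom execution error into an explicit design decision, rather than a hidden bias toward smaller models.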
Packaging has the benefit of incremental development and modular design, where in the example of 'Tablator' two methods ([45] and [17]) can be combined. Additionally, as the method complexity increases, version control that includes the configuration, together with analysis that corresponds to the implementation, can prevent misinterpretation of the results.

3.3 RQ-3: Can ABLATOR be used to perform a large-scale ablation study on tabular datasets?

We use 'Tablator' presented in Section 3.1 to evaluate possible improvements in data processing, the Transformer model architecture, and the effect of training hyperparameters on 2,337 trials, where the current largest ablation on tabular datasets is 2,000 trials [48]. Our results are summarized in Figures 4 and 5.

Figure 5: Example of automatically generated analysis artifacts from ABLATOR. On the left are the artifacts for 'CO' [7] and on the right for 'AL' [16]. We compare the effect of an optimizer on the performance on a dataset. In agreement with [44], there is no single model that generalizes across all datasets; for example, Adam [24] under-performs for 'AL' but not for 'CO'. We conclude that separate ablation studies will be required for different datasets.

In Table 1 we report the accuracy, where higher is better ↑, and the root-mean-square error ('RMSE'), where lower is better ↓, on 11 datasets [32,25,18,18,2,16,17,4,7,11,38], identical to the benchmark of FT-T [17]. We find Tablator performs similarly on all datasets. The goal of the benchmark comparison is to verify our implementation, while the goal of our study is to evaluate general methods that work best across datasets, not a benchmark improvement. Similarly to FT-T [17], we conclude that the simplest methods work best in most general cases; i.e., SGD [39] with momentum has the best mean performance on 9 of 11 datasets.
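The distinction between ranking a method by its mean trial and by its best trial, which drives the RAdam observations that follow, can be made concrete with a tiny helper. The scores below are toy values invented for the example, not the paper's results.

```python
import statistics

def rank_methods(results, reduce):
    """Rank methods (first = best) for a higher-is-better metric after
    reducing each method's trials with `reduce` (e.g. mean or max)."""
    scores = {m: reduce(trials) for m, trials in results.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy accuracies: a complex method with a high best trial but a low mean.
results = {
    "SGD":   [0.82, 0.81, 0.83],
    "RAdam": [0.90, 0.60, 0.65],
}
by_mean = rank_methods(results, statistics.mean)  # rewards consistency
by_best = rank_methods(results, max)              # rewards the single best trial
```

The two reductions can disagree, which is exactly why a method like RAdam can rank 2.25 by best trial but 3.75 by mean on the same datasets.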
For more complex methods, there is a large variance in the performance of the method between datasets. For example, we find that RAdam [28] ranks on average 2.71 for classification datasets but 3.75 for regression datasets when evaluated by the mean performance. Additionally, more complex methods may result in the best-performing trial but perform worse on average, where RAdam ranks on average 2.25 when evaluated on the best-performing trial for regression datasets (compared to 3.75). Our results indicate that using a complex method may require a large tuning budget to return good results. Additionally, we conclude that larger models only perform moderately better (Figure 4).

The high performance variance between different components on different datasets leads us to conclude that evaluations should be done with multiple datasets. Additionally, we find that tuning specific to the dataset and the training configuration would be required. Simple design choices, such as SGD and moderate model capacity, can provide a good starting point, while more complex training configurations can provide trade-offs between performance and uncertainty that can be specific to the use case.

From the median and mean performance observed in our results, we did not find any of the preprocessing methods to have a consistent, significant effect on the model performance. ABLATOR can help provide actionable results specific to the dataset. We conclude that several ablation experiments are required to evaluate a method, and ABLATOR is the only tool currently available to facilitate rapid evaluation.

4 Discussion

In our work we present ABLATOR, an AutoML framework for ablation experiments. Beyond our framework, there are several open issues w.r.t. automated decision making, as there is no universal statistical test or threshold to accept or reject a hypothesis. Analysis requires domain expertise relevant to the evaluation setting.
Specific to ML research is the lack of methods for evaluating a hypothesis where the metric can be both non-normally distributed and heteroskedastic, i.e. Figure 5.

Broader Impact Statement Performing large-scale ablation experiments may require a large amount of computational resources that can negatively impact the environment through CO2 emissions. However, the automation provided by ABLATOR can result in a more effective use of computational resources and reduce CO2 emissions. ABLATOR can help improve research practices without a negative impact on society when used in the context in which it is presented.

5 Related Works

We identify four categories of work that are most similar to ours: work that focuses on errors introduced by tools and incorrect analysis, work on horizontal scaling of experiments, works that aid in ablation studies, and tools for automated HPO.

Previous work [10,8,9,27,36,1,46,12] identifies the source of erroneous analysis as poor experiment design practices resulting from improper use of statistical evaluation methods, HPO budget, HPO strategies, and tooling, and provides recommendations. We extend their work and investigate errors during horizontal scaling of experiments that lead to erroneous analysis. We identify errors from the sampling strategy, non-random execution errors, and implementation errors. We provide general recommendations in Section 3.2 and address the errors with ABLATOR.

Several tools are proposed [13,15,22,43,26] that support distributed experiment execution. However, they require manual effort in integrating with other libraries for resource allocation, scheduling of experiments, resuming faulty trials, result aggregation, configuration sampling, and analysis. In contrast, ABLATOR combines all of the above in an automated fashion, where only the implementation and configuration of the method are used to produce the analysis artifacts.

Ablation frameworks introduce methods and tools specific to constructing ablation analysis artifacts.
Such methods can have limited use cases [19,5,37] or lack automation [42]. In contrast, ABLATOR provides analysis artifacts that give a holistic view of a method's performance and can be extended to support automation and the specific use-cases addressed by the works above.

AutoML methods [14,48,6] are designed for HPO and can be extended to ablation experiments with support for automated analysis. Unlike ABLATOR, such tools are designed for simple use cases, such as statistical models, and require additional effort to scale the experiments horizontally. Such tools, and similar ones, can be used as the implementation provided to ABLATOR and as such are orthogonal to our work. AutoAblation [40] extends Maggy [30] to deep learning models. However, allocating and managing GPU resources for each trial requires manual effort. Moreover, AutoAblation does not provide experiment persistence and as such is not fault-tolerant. Additionally, the declarative design paradigm has limited use cases, as opposed to the object-oriented design of ABLATOR. As such, ABLATOR improves automation by managing GPU resources, storing experimental artifacts, restarting erroneous trials, and removing boiler-plate code, where only the method implementation with the configuration is required to provide automated analysis.

6 Conclusion

In this work, we identify several sources of error common in horizontal scaling of multiple experimental trials. We provide general recommendations and address the errors with a stateful experiment design paradigm. ABLATOR implements the paradigm to automate the scaling of ablation experiments across multiple resources and to produce analysis artifacts in an automated fashion for rapid iterative prototyping. We evaluate ABLATOR with a Transformer model for tabular datasets, 'Tablator', where we study the effect of several architectural components and hyperparameters in the largest ablation study on tabular datasets to date.
ABLATOR is an effective tool to conduct large-scale ablation studies with ease and leads to actionable insights that are particular to the experimental setting.

References

[1] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems, 34:29304–29320, 2021.
[2] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature communications, 5(1):4308, 2014.
[3] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in neural information processing systems, 24, 2011.
[4] Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. 2011.
[5] André Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett, and Holger Hoos. Efficient parameter importance analysis via ablation with surrogates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[6] André Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. Cave: Configuration assessment, visualization and evaluation. In Roberto Battiti, Mauro Brunato, Ilias Kotsireas, and Panos M. Pardalos, editors, Learning and Intelligent Optimization, pages 115–130, Cham, 2019. Springer International Publishing.
[7] Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and electronics in agriculture, 24(3):131–151, 1999.
[8] Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti, et al. Accounting for variance in machine learning benchmarks.
Proceedings of Machine Learning and Systems, 3:747–769, 2021.
[9] Xavier Bouthillier, César Laurent, and Pascal Vincent. Unreproducible research is reproducible. In International Conference on Machine Learning, pages 725–734. PMLR, 2019.
[10] Xavier Bouthillier and Gaël Varoquaux. Survey of machine-learning experimental methods at NeurIPS2019 and ICLR2020. PhD thesis, Inria Saclay Ile de France, 2020.
[11] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the learning to rank challenge, pages 1–24. PMLR, 2011.
[12] Katharina Eggensperger, Marius Lindauer, and Frank Hutter. Pitfalls and best practices in algorithm configuration. Journal of Artificial Intelligence Research, 64:861–893, 2019.
[13] William Falcon et al. Pytorch lightning. GitHub repository, 3, 2019.
[14] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-sklearn 2.0: The next generation. CoRR, abs/2007.04074, 2020.
[15] V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Tejani. High-level library to help with training neural networks in pytorch. https://github.com/pytorch/ignite, 2020.
[16] Jan-Mark Geusebroek, Gertjan J Burghouts, and Arnold WM Smeulders. The amsterdam library of object images. International Journal of Computer Vision, 61:103–112, 2005.
[17] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. CoRR, abs/2106.11959, 2021.
[18] Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, et al. Analysis of the automl challenge series. Automated Machine Learning, 177, 2019.
[19] Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang, Brian Barr, and C Bayan Bruss. Based-xai: Breaking ablation studies down for explainable artificial intelligence.
arXiv preprint arXiv:2207.05566, 2022.
[20] Eduardo Hariton and Joseph J Locascio. Randomised controlled trials—the gold standard for effectiveness research. BJOG: an international journal of obstetrics and gynaecology, 125(13):1716, 2018.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[22] Jeremy Howard and Sylvain Gugger. fastai: A layered API for deep learning. CoRR, abs/2002.04688, 2020.
[23] Kosuke Imai, Dustin Tingley, and Teppei Yamamoto. Experimental Designs for Identifying Causal Mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society, 176(1):5–51, 11 2012.
[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[25] Ron Kohavi et al. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid. In Kdd, volume 96, pages 202–207, 1996.
[26] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
[27] Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. On the reproducibility and replicability of deep learning in software engineering. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(1):1–46, 2021.
[28] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.
[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[30] Moritz Meister, Sina Sheikholeslami, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. Maggy: Scalable asynchronous parallel hyperparameter search.
In Proceedings of the 1stWorkshop on Distributed Machine Learning , pages 28–33, 2020.[31] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang,William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emergingAI applications. CoRR , abs/1712.05889, 2017.11[32] R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters ,33(3):291–297, 1997.[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, AndreasKöpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy,Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style,High-Performance Deep Learning Library . Curran Associates Inc., Red Hook, NY, USA, 2019.[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pret-tenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot,and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine LearningResearch , 12:2825–2830, 2011.[35] David Picard. Torch.manual_seed(3407) is all you need: On the influence of random seeds indeep learning architectures for computer vision, 2021.[36] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer,Florence d’Alché Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machinelearning research (a report from the neurips 2019 reproducibility program). The Journal ofMachine Learning Research , 22(1):7459–7478, 2021.[37] Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hy-perparameters of machine learning algorithms. The Journal of Machine Learning Research ,20(1):1934–1965, 2019.[38] Tao Qin and Tie-Yan Liu. Introducing letor 4.0 datasets. 
arXiv preprint arXiv:1306.2597 , 2013.[39] Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals ofmathematical statistics , pages 400–407, 1951.[40] Sina Sheikholeslami, Moritz Meister, Tianze Wang, Amir H Payberah, Vladimir Vlassov,and Jim Dowling. Autoablation: Automated parallel ablation studies for deep learning. InProceedings of the 1st Workshop on Machine Learning and Systems , pages 55–61, 2021.[41] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, andIsabelle Guyon. Bayesian optimization is superior to random search for machine learninghyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Hugo JairEscalante and Katja Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demon-stration Track , volume 133 of Proceedings of Machine Learning Research , pages 3–26. PMLR,06–12 Dec 2021.[42] Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceedingsof the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining ,pages 2367–2376, 2018.[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, AnthonyMoi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer,Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, SylvainGugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methodsin Natural Language Processing: System Demonstrations , pages 38–45, Online, October 2020.Association for Computational Linguistics.12[44] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEEtransactions on evolutionary computation , 1(1):67–82, 1997.[45] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santi-ago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 
Big bird: Transformersfor longer sequences. Advances in neural information processing systems , 33:17283–17297,2020.[46] Donglin Zhuang, Xingyao Zhang, Shuaiwen Song, and Sara Hooker. Randomness in neuralnetwork training: Characterizing the impact of tooling. Proceedings of Machine Learning andSystems , 4:316–336, 2022.[47] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar C Tatikonda, Nicha Dvornek, XenophonPapademetris, and James Duncan. Adabelief optimizer: Adapting stepsizes by the belief inobserved gradients. Advances in neural information processing systems , 33:18795–18806, 2020.[48] Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-pytorch tabular: Multi-fidelitymetalearning for efficient and robust autodl. arXiv preprint arXiv:2006.13799 , 2020.137 Submission Checklist1. For all authors. . .(a)Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope? [Yes] Our results can be found in sections 3.1 to 3.3.(b) Did you describe the limitations of your work? [Yes] See section 4.(c)Did you discuss any potential negative societal impacts of your work? [Yes] See sectionsec-tion 4.(d)Have you read the ethics author’s and review guidelines and ensured that your paperconforms to them? https://automl.cc/ethics-accessibility/ [Yes] They are appliedthroughout the paper.2. If you are including theoretical results. . .(a)Did you state the full set of assumptions of all theoretical results? [N/A] There are notheoretical results in our work(b)Did you include complete proofs of all theoretical results? [N/A] There are no theoreticalresults in our work3. If you ran experiments. . .(a)Did you include the code, data, and instructions needed to reproduce the main experimentalresults, including all requirements (e.g., requirements.txt with explicit version), an instruc-tiveREADME with installation, and execution commands (either in the supplemental materialor as a url)? 
[Yes] We have included the code that was used to run all the experiments,produce the tables and figures as a zip file.(b)Did you include the raw results of running the given instructions on the given code anddata? [Yes] We include the raw results that were used to obtain our analysis.(c)Did you include scripts and commands that can be used to generate the figures and tablesin your paper based on the raw results of the code, data, and instructions given? [Yes] Wehave included them in the supplementary.(d)Did you ensure sufficient code quality such that your code can be safely executed and thecode is properly documented? [Yes] We have followed standard development practices.(e)Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixedhyper-parameter settings, and how they were chosen)? [Yes] We have included them in thesupplementary.(f)Did you ensure that you compared different methods (including your own) exactly onthe same benchmarks, including the same datasets, search space, code for training andhyperparameters for that code? [Yes] We have included them in the supplementary.(g)Did you run ablation studies to assess the impact of different components of your approach?[Yes] See section 3.3(h)Did you use the same evaluation protocol for the methods being compared? [Yes] We useidentical evaluation protocol when comparing between methods for all our experiments insections 3.1 to 3.3(i)Did you compare performance over time? [N/A] Performance over time is not applicablefor our work.14(j)Did you perform multiple runs of your experiments and report random seeds? [Yes] Therandom seeds used are in the code in our supplementary.(k)Did you report error bars (e.g., with respect to the random seed after running experimentsmultiple times)? [Yes] results are in sections 3.2 and 3.3(l)Did you use tabular or surrogate benchmarks for in-depth evaluations? 
[Yes] We use thesame benchmark as [17](m) Did you include the total amount of compute and the type of resources used (e.g., type ofgpus, internal cluster, or cloud provider)? [Yes] We have included it in the supplementary.(n)Did you report how you tuned hyperparameters, and what time and resources this required(if they were not automatically tuned by your AutoML method, e.g. in a nasapproach; andalso hyperparameters of your own method)? [Yes] They are described in section 3.1 andthe supplementary.4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .(a)If your work uses existing assets, did you cite the creators? [Yes] table 1 and supplementary.(b)Did you mention the license of the assets? [Yes] We provide details of all assets in thesupplementary.(c)Did you include any new assets either in the supplemental material or as a url? [N/A] Wedo not use any new assets.(d)Did you discuss whether and how consent was obtained from people whose data you’reusing/curating? [N/A](e)Did you discuss whether the data you are using/curating contains personally identifiableinformation or offensive content? [N/A]5. If you used crowdsourcing or conducted research with human subjects. . .(a)Did you include the full text of instructions given to participants and screenshots, if appli-cable? [N/A](b)Did you describe any potential participant risks, with links to Institutional Review Board(irb) approvals, if applicable? [N/A](c)Did you include the estimated hourly wage paid to participants and the total amount spenton participant compensation? [N/A]15
automl.cc/AutoML/2023/ABCD_Track
2023
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments
["Iordanis Fostiropoulos", "Laurent Itti"]
Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as ‘data-parallelism’ or ‘model-parallelism’. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, ‘Tablator’, where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR; https://github.com/fostiropoulos/ablator
["Machine Learning Systems", "Ablation Experiments", "Experiment Design"]
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments

Iordanis Fostiropoulos, Laurent Itti (University of Southern California, Los Angeles, California)

Abstract: Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as 'data-parallelism' or 'model-parallelism'. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, 'Tablator', where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR; https://github.com/fostiropoulos/ablator

1 Introduction

Machine Learning (ML) research has been criticized for an inability to explain the reasons a method provides an improvement on a specific benchmark.
It can be unclear whether a novel component is responsible for the improvement, or whether the improvement is the result of a statistical outlier [35].

Ablation is used to understand how the hyperparameters and architectural components contribute to the performance of a method. This is in contrast to Hyper-Parameter Optimization (HPO) or Neural Architecture Search (NAS), where the objective is to search for the single best performing configuration. As the complexity of ML models increases, so does the number of components and parameters that need to be ablated, which increases the search space of possible configurations. Therefore, efficient horizontal scaling of multiple parallel experimental trials is necessary.

There is a lack of available frameworks for horizontal scaling of ablation experiments. Currently, ML practitioners manually perform horizontal scaling for experiments, such as for hyperparameter selection, distributed execution, consolidation, and analysis of artifacts [10]. Additionally, current frameworks [31] for distributed execution do not provide native support for maintaining the state of an experiment and resuming the execution of multiple trials, referred to as experiment persistence. We find that errors in the early stages of experiments can propagate to the analysis and lead to misleading conclusions. Possible errors may be introduced by sampling bias in the hyperparameter selection strategy or by the fault-intolerance of distributed execution (survival bias).

The execution of randomized control trials is necessary to determine causal effects [23, 20]. We identify several sources of errors that can influence the results. We categorize them as Analysis, Execution, and Implementation errors.
AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Figure 1: Left is the rapid prototyping process when using ABLATOR, where only the method implementation and the configuration are required to RUN() the study and provide ANALYSIS(). ABLATOR handles the horizontal scaling of experimental trials on a cluster of nodes and is fault tolerant, where trials can be continued on the same or a different node due to the Persistence provided by ABLATOR. Right is the process without ABLATOR, where the user must use different libraries or manually perform 'HPO Selection', 'Resource Allocation', and 'Analysis'. Additional manual effort will be required to integrate between the libraries, where errors between different steps propagate to the analysis, which will be erroneous. ABLATOR provides automation by removing boiler-plate code and managing errors internally.

Analysis errors can result from the hyperparameter selection sampling bias. Nonrandom effects during experiment execution can introduce analysis errors. For example, inconclusive trials due to out-of-memory errors caused by a larger model footprint would introduce survival bias to the analysis that favors smaller models. Implementation errors are mistakes made by users, caused by the increased code complexity of ablating multiple method components while maintaining different code bases. We discuss the details of our analysis in Section 3.2.

To aid in error-free horizontal scaling of multiple experiments in the ML community, we propose a stateful experiment paradigm where we unify all experiment stages under a single framework. A stateful experiment is initialized by the configuration and code implementation of a method. Our framework maintains the state of each experimental trial and provides experiment persistence, where the experiment can continue the execution agnostic to the execution environment.
The analysis artifacts are produced automatically by the experiment state for faster prototyping. Our paradigm is implemented in our tool ABLATOR with support for PyTorch [33] model development. We present an analysis of the sources of errors and provide recommendations that can be useful beyond our framework. We use our framework to study the effect of multiple training and model components on the performance of a Transformer model for tabular datasets, 'Tablator', where we perform a large-scale ablation study of 2,337 trials. Our contributions can be summarized as follows. First, we provide a formalization of a stateful experiment design paradigm that we use to address common errors in the execution of ML experiments. Second, we introduce ABLATOR, a framework that implements our paradigm and facilitates the automated execution and analysis of a model implementation given a configuration. Third, we identify sources of error in ML ablation studies and provide recommendations for mitigating them. Fourth, we perform the largest ablation study to date of a Deep Learning model on tabular datasets and provide analysis that can be useful to the research community.

We first introduce the features of ABLATOR relevant to horizontal scaling of experiments. Next, we evaluate the main features of our tool in a case study demonstrating the horizontal scaling capabilities of ABLATOR. We present our results using three research questions (Sections 3.1 to 3.3).

2 Methods

To implement ABLATOR and address common issues in horizontal scaling of experiments, it is necessary to introduce the formalism of a 'stateful experiment design' paradigm. In this section, we introduce our paradigm and, in Section 2.4, the implementation of ABLATOR.
We identify three stages of an experiment: the design, execution, and analysis (Sections 2.1 to 2.3).

2.1 Experiment Design

During the design phase of an ML ablation study, a hypothesis is defined as an experiment on the improvement that an architectural component, such as Residual Connections, provides to the performance of the model. The search-space of our hypothesis can be defined as Residual = [True, False]. The methodology of our experiment is defined by the implementation of the model.

Multiple experimental trials are required to improve the statistical power of a test [20], which requires randomly sampling from the search-space. An experimental trial can be described as a stochastic process that produces a performance metric. The stochasticity can be observed when performance differs significantly with identical initial conditions, such as re-running the same experiment but obtaining different results.

Thus, to define a trial, we maintain two states to describe the system at any given point: the initial conditions (Sections 2.1.1 and 2.1.2) and the current state (Section 2.2). The initial conditions of a trial are defined by the sampled hyper-parameters and the implementation.

distributed.yaml:

    total_trials: 2000
    optim_metrics: [[val_loss, min]]
    tune:
      train_config.optimizer_config.name: ["adam", ...
      train_config.dataset: ["year", "yahoo", "helena", ...
      model_config.mask_type: ["mix", "global", "full", "random"]
      model_config.residual: [True, False]
      model_config.random_mask_alpha: [0.5, 1]

prototyping.yaml:

    train_config:
      dataset: adult
      optimizer_config:
        name: adam
    model_config:
      mask_type: random

    @configclass
    class TablatorConfig(ModelConfig):
        residual: bool = True
        d_out: Derived[ty.Optional[int]] = None
        mask_type: MaskType = MaskType("random")

    @configclass
    class RunConfig(ParallelConfig):
        experiment_dir: Stateless[Optional[str]] = None
        model_config: ModelConfig
        train_config: TrainConfig

Figure 2: ABLATOR provides a configuration system specific to ML experiments, where it has to encompass multiple trials in a compact definition and be unambiguous. On the left is an illustration of the configuration for distributed execution (distributed.yaml) and method prototyping (prototyping.yaml). On the right, the configuration is type checked by the ABLATOR library. The library provides flexible type definitions (red) that are resolved during run-time. The configuration is compact and unambiguous at initialization, supporting our stateful experiment design paradigm in Section 2.1.

2.1.1 Configuration. The configuration describes the hyperparameter search-space from which the hyperparameters are sampled. Two custom Python annotations are introduced, Stateless and Derived, to define attributes to which the experiment state is agnostic, while unannotated attributes are assumed to be stateful control variables. Stateful attributes require an assignment during the initialization stage unless they are annotated as Optional.

Stateless configuration attributes can be used as a proxy for variables that can take different value assignments between trials or experiments. For example, the learning rate can be set as an independent variable and must be annotated as stateless. Additionally, there are variables that take different values between experiments and trials to which the state is agnostic; for example, a random seed or a directory path between execution environments can be annotated as stateless.

Derived attributes are un-decided at the start of the experiment and do not require a value assignment. Instead, the value is determined by internal experiment processes that can depend on other experimental attributes, such as the dataset.
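The distinction between the three kinds of attributes can be mimicked in a few lines of plain Python (a self-contained sketch; the helper names `experiment_id` and `resolve` are illustrative, not part of the ABLATOR API):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of the Stateless/Derived semantics.
@dataclass
class Config:
    residual: bool = True                 # stateful: part of the experiment identity
    experiment_dir: Optional[str] = None  # stateless: may vary across environments
    d_out: Optional[int] = None           # derived: resolved at run-time

STATELESS = {"experiment_dir"}
DERIVED = {"d_out"}

def experiment_id(cfg: Config) -> str:
    """Signature over stateful attributes only: two runs that differ only
    in stateless or derived values map to the same experiment."""
    stateful = {k: v for k, v in vars(cfg).items()
                if k not in STATELESS | DERIVED}
    return str(sorted(stateful.items()))

def resolve(cfg: Config, n_classes: int) -> Config:
    # A derived value: the output dimension depends on the dataset.
    cfg.d_out = n_classes
    return cfg

a = resolve(Config(experiment_dir="/tmp/run1"), n_classes=7)
b = resolve(Config(experiment_dir="/mnt/run2"), n_classes=7)
assert experiment_id(a) == experiment_id(b)  # stateless path is ignored
assert a.d_out == 7                          # derived attribute resolved at run-time
```

Excluding stateless and derived values from the signature is what lets the same experiment be moved between execution environments without changing its identity.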
However, given the same initial state, the attribute is expected to result in the same value and is therefore deterministic. For example, the input size used in a model's architecture that depends on the dataset will be annotated as Derived during the experiment design phase.

The annotations address common requirements of ML experiments, where a configuration may have to describe a search-space that encompasses multiple trials, as opposed to taking on a specific value assignment at initialization. Additionally, an ML experiment can have attributes that are difficult to model at initialization but can be inferred during execution. For a stateful design paradigm, the configuration should be unambiguous at the initialization state, i.e. Figure 2.

2.1.2 Implementation. The implementation describes the methodology of the hypothesis. Invariance of the implementation w.r.t. the method evaluated produces a single code artifact that encapsulates all methods, i.e. a single code base for using and not using residual connections. The implementation computes one or more evaluation metrics. Lastly, the implementation should have a deterministic value assignment for the variables we defined as Derived.

Implementation invariance provides a compact representation and is robust to errors. A compact representation provides ease of use that is a consequence of a shared implementation among the ablating components, where the differences are specified through the configuration and applied by conditional if statements. The advantage of this approach is that the performance variance caused by implementation differences is minimized, where even the order of matrix multiplication can have significant effects on the method performance [46].

2.2 Experiment Execution

Experiment state can be Running or Complete as the aggregate of the state of all experimental trials. Each trial can be in three additional states: Pending, Failed, or Pruned. Pending trials are defined by their initial conditions alone, i.e.
the sampled hyperparameters. A Running trial extends the definition to include a checkpoint. Complete trials extend the definition to include one or more metrics, such as the validation loss. Pruned and Failed trials are the result of irrecoverable errors during initialization or execution. A fault-tolerant strategy reschedules trials with recoverable errors as Pending and attempts to resume from the checkpoint. A long-running experiment can be interrupted (i.e. server maintenance) while errored trials do not interfere with the results (i.e. failed trials due to recoverable errors).

Checkpoint describes the optimization state of a trial and contains sufficient information to resume execution. ABLATOR stores the model weights, optimizer, scheduler, and training meta-data, such as the current training iteration, using a compact representation. The checkpoint mechanism in ABLATOR can be extended to support custom use cases, i.e. RL. Lastly, maintaining the state of the experiment requires keeping track of the checkpoints and results. Multiple checkpoints are stored locally on each node and can be synchronized with cloud storage. The experiment is agnostic to the execution environment; experiment persistence.

2.3 Actionable Analysis

Analysis that is actionable is a result of the automation to provide sufficient artifacts to support decision making. The artifacts should help facilitate a quick and informed decision on the likelihood of the hypothesis. The experiment state is used to infer the hypothesis, i.e. 'what are we ablating?', and the conclusiveness of the analysis, i.e. 'is the trial failed?'. The analyses ABLATOR provides infer the search-space, such as control and independent variables, from the configuration and the variable type to produce the corresponding artifacts. The artifacts produced address common problems in evaluating ML methods (Section 3.2).
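The trial lifecycle of Section 2.2 can be sketched as a small state machine (a simplified, self-contained illustration under our own naming, not ABLATOR's internals), where a recoverable failure returns the trial to Pending with its checkpoint intact:

```python
from enum import Enum, auto

class TrialState(Enum):
    PENDING = auto()
    RUNNING = auto()
    COMPLETE = auto()
    FAILED = auto()
    PRUNED = auto()

class Trial:
    def __init__(self, hyperparams):
        self.hyperparams = hyperparams   # initial conditions
        self.state = TrialState.PENDING
        self.checkpoint = None           # enough information to resume
        self.metrics = None

    def run(self, train_step, total_iters):
        self.state = TrialState.RUNNING
        start = self.checkpoint["iteration"] if self.checkpoint else 0
        try:
            for i in range(start, total_iters):
                loss = train_step(i)
                # a compact checkpoint: enough to resume on any node
                self.checkpoint = {"iteration": i + 1, "loss": loss}
            self.metrics = {"val_loss": self.checkpoint["loss"]}
            self.state = TrialState.COMPLETE
        except MemoryError:              # e.g. OOM: recoverable, reschedule
            self.state = TrialState.PENDING
        except Exception:                # irrecoverable
            self.state = TrialState.FAILED

# A trial interrupted mid-run resumes from its checkpoint:
calls = []
def flaky_step(i):
    calls.append(i)
    if i == 3 and len(calls) == 4:       # fail once at iteration 3
        raise MemoryError
    return 1.0 / (i + 1)

t = Trial({"lr": 1e-3})
t.run(flaky_step, total_iters=5)
assert t.state is TrialState.PENDING and t.checkpoint["iteration"] == 3
t.run(flaky_step, total_iters=5)         # rescheduled: resumes at iteration 3
assert t.state is TrialState.COMPLETE
```

In ABLATOR the checkpoint additionally carries model, optimizer, and scheduler state, which is what makes the resumption agnostic to the node the trial restarts on.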
For each attribute, the goal is to encapsulate the best, average, variance, and distribution of the performance metric under a single figure, i.e. Figures 4 and 5.

2.4 ABLATOR

ABLATOR is designed in Python and with support for PyTorch models, while the distributed execution system uses Ray Core [31]; Figure 1. We describe the features of ABLATOR important in addressing a stateful experiment paradigm. ABLATOR can be extended or customized specific to the use-case without loss of automation, where an object-oriented design provides access to function overwriting. The features of ABLATOR provide ease of use, where it requires defining an experiment through implementation and configuration. Automation is supported by providing an abstraction layer on distributed execution with fault tolerance, artifact consolidation, and analysis. Our framework is agnostic to the execution environment and can run on a laptop and a cluster of nodes.

Configuration uses a hierarchical dictionary-like format that is easy to understand and can be converted to and from yaml files. ABLATOR uses a strict type-checking system with custom annotations (Section 2.1.1). A unique signature identifier ("ID") is generated for each experiment that corresponds to the values of the stateful configuration attributes, while for a trial, the identifier is based on the unique value assignment of all configurable properties. Thus, the configuration system allows for a hierarchical representation of trials under a single experiment and facilitates experiment persistence, where multiple experiments are stored in the same directory.

Implementation. A Trainer class will manage the physical resources of the experiment. There are two options according to the use case: ProtoTrainer for prototyping in a local environment, and ParallelTrainer for horizontal scaling of a single experiment. ParallelTrainer is unique to ABLATOR, where multiple trials are managed and executed in parallel.
Prototyping to experiment deployment requires a single change: ProtoTrainer ⇒ ParallelTrainer.

Artifact Persistence. For every resource node, the trials are executed in parallel, and failure in a single trial does not result in interruption of the experiment. We use the master node to maintain the experiment state (Section 2.2) and synchronize the artifacts of all nodes with a central database. Cloud compute nodes are often ephemeral, and restarting the experiment requires only that the files be synchronized among the centralized storage and all nodes. Furthermore, the files stored in the central storage are sufficient to perform an analysis or recover from errors.

Analysis. Artifacts are specific to numerical attributes and categorical attributes. The attribute type is informed by the configuration. Figures are artifacts that summarize the mean, best, and distribution of a performance metric. For numerical attributes, we use scatter-plots with optional interpolation curves, while for categorical attributes we use violin-plots. The analysis can be extended to support custom use cases, such as additional figures or tables, while still being automatically generated from the experiment state; examples are in Section 3.3 and our supplementary.

3 Experiments and Results

We first present how ABLATOR can be used for horizontal scaling with an ablation study on 'Tablator', a Transformer model we designed for this study; Section 3.1. In Section 3.2 we categorize common errors during horizontal scaling of ablation experiments and provide our recommendations. In Section 3.3 we provide the results of an ablation experiment on a tabular dataset benchmark. For reasons of brevity, we discuss only the results most relevant to ABLATOR. We attach the code that was used for our experiments and analysis, and additional experiments, in the supplementary.

3.1 RQ-1: How can ABLATOR improve the horizontal scaling of thousands of experimental trials?

ABLATOR requires the configuration and implementation.
We extend the implementation of FT-Transformers (FT-T) [17] (https://github.com/Yura52/tabular-dl-revisiting-models) with minimal changes to the original code. We implement a model we call 'Tablator' and evaluate all the design components of FT-T as well as the effect of Residual Connections [21] and Attention Masks inspired by BigBird [45]. We evaluate 'Full', 'Mixed', 'Global', and 'Random' attention mechanisms and explain their implementation in the supplementary.

We perform an ablation on 14 model hyperparameters and components in total, and evaluate the effect that model capacity, dropout hyper-parameters, prenormalization, weight initialization, and activation function have on the model performance. Additionally, we evaluate 7 dataset preprocessing techniques and training configurations, such as feature encoding methods, missing value imputation, feature normalization, training time, and optimization.

The differences between 'Tablator' and FT-T are an additional module for Attention masks that requires 9 additional lines of code, as well as 2 lines of code insertions for residual connections. The majority of the development effort was directed towards making the original dataset performant and converting it to a PyTorch Dataset as opposed to a Python dataclass. We define the tunable configurable hyperparameters as shown in Figure 2.

We first verified our implementation with a ProtoTrainer in this section, and then we scale our experiment with a single code change using a ParallelTrainer to thousands of trials for our results in Section 3.3.
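The single-change switch can be mimicked with a minimal mock of the two trainer classes (a workflow sketch; the class names follow the paper, but the constructor and `launch` signatures here are our own, not the real ABLATOR API):

```python
# Mock of the two-trainer workflow: a shared interface so that scaling up
# is a one-line change. Bodies are stand-ins for training real configurations.
from concurrent.futures import ThreadPoolExecutor

def run_trial(trial_id: int) -> dict:
    # stand-in for training one sampled configuration
    return {"trial": trial_id, "val_loss": 1.0 / (trial_id + 1)}

class ProtoTrainer:
    """Runs a single trial locally for prototyping."""
    def launch(self, n_trials: int = 1):
        return [run_trial(0)]  # prototyping ignores the trial budget

class ParallelTrainer(ProtoTrainer):
    """Same interface; fans trials out across workers (Ray in ABLATOR)."""
    def launch(self, n_trials: int = 1):
        with ThreadPoolExecutor() as pool:
            return list(pool.map(run_trial, range(n_trials)))

# Prototyping:
results = ProtoTrainer().launch()
# Deployment is a single change: ProtoTrainer -> ParallelTrainer
results = ParallelTrainer().launch(n_trials=8)
assert len(results) == 8
```

Keeping the two trainers behind one interface is what makes the prototype-to-cluster transition a one-line edit rather than a rewrite.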
For this experiment, it took significantly more time to write the current section of this paper than it took to write the code and start the execution of the experiments.

3.2 RQ-2: What are common sources of errors during horizontal scaling of experiments?

We identify 3 categories of errors, Analysis†, Execution‡, and Implementation∗ errors, that are based on empirical observations, and use previous analysis [10, 8, 9, 27, 36, 1, 46, 12] to support our conclusions. In this section, we provide examples of each and attach additional analysis in our supplementary.

Figure 3: We evaluate how Budget Allocation‡ can influence the analysis of an ablation study. We vary the number of trials we use for analysis ('N trials'). We compare estimating the performance of a method on a dataset using the mean (left) (i.e. ANOVA) or the best (right) trial (i.e. proof-by-existence). Evaluating the performance of a component by its mean performance would require fewer trials for an easier dataset ('Covtype') when compared to using the best trial. For a more challenging dataset ('Aloi'), evaluating by the best trial would be more efficient, as the performance converges at around 20 trials (right figure) compared to >50 for the mean (left figure). We conclude that the ablation budget should be taken into account and be relevant to the type of analysis.

Sampling Strategy† can be incompatible with the method used to evaluate the performance of a component and lead to misleading analysis [41]. For example, performing HPO and comparing the mean performance of the sampled trials can bias the result towards a single component variant. We perform two identical experiments using Tablator with an identical budget for the CovType ('CO') dataset [7]. When randomly sampling between 5 optimizers, AdaB [47], Adam [24], AdamW [29], RAdam [28], and SGD [39], every optimization algorithm was sampled with an even probability P(O) ≈ 0.2.
In contrast, when performing HPO with a Tree-structured Parzen Estimator (TPE) [3], SGD was oversampled with P(SGD) = 0.76, as it was found to perform relatively better compared to other methods. Other optimization methods were undersampled by TPE, and their estimated performance is lower when compared to the empirical mean performance of the same method calculated via random sampling. When TPE was used, all optimizers appeared to underperform on average by 4.6% and 3.8% when evaluating the best and mean trial performance, respectively. We conclude that statistical tests can be influenced by the bias of the HPO method used to sample configurations, and the performance of a component might not be fully explored.

Survival Bias† can be caused by nonrandom execution errors. We identify the trials for which there were memory errors. We perform feature-importance analysis and use a surrogate random forest model [34] to predict whether a trial will result in a memory error. We find that the configuration attributes related to the dataset and the hidden dimension were the most important. A larger dataset has more features, which leads to a model with a larger hidden dimension. The attributes related to the hidden dimension scored 23% higher than the average feature importance. We conclude that smaller models and datasets will have a survival bias from the fewer out-of-memory execution errors and that such bias could be mitigated by better resource allocation. For example, one can group experiments by their memory utilization so as to avoid out-of-memory errors from the largest trial.

Table 1: We evaluate the difference between the best-performing trials as reported by FT-Transformer ('FT-T') [17] and as found by our ablation experiments in Section 2.1. FT-T is in the subspace of configurations of Tablator where a greedy HPO strategy is used, as opposed to random sampling for Tablator. As such, we expect Tablator to perform similarly but not better. We use the benchmark as a way to evaluate Implementation Errors∗ from Section 3.2. We conclude that our implementation contains no errors, as the relative difference (ΔImp.∗) is within the expected margin of error between HPO and random sampling.

Dataset    CA↓     AD↑    HE↑    JA↑    HI↑    AL↑    EP↑    YE↓    CO↑    YA↓     MI↓
FT-T       0.459   0.859  0.391  0.732  0.729  0.960  0.898  8.855  0.970  0.756   0.746
Tablator   0.535   0.856  0.368  0.718  0.723  0.921  0.896  8.778  0.930  0.780   0.749
ΔImp.∗    -0.076   0.003  0.023  0.014  0.006  0.039  0.002  0.077  0.04  -0.024  -0.003

Figure 4: Evaluation of the effect of a larger model for a regression dataset, where (RMSE)↓ is normalized for the relative difficulty of each dataset. The larger model performs better but with higher variance, where the uncertainty on the estimated performance increases. A larger model might be a more risky choice when deploying a model that requires iterative training.

Resource Utilization statistics‡ We observe the resource utilization statistics: the mean usage of a trial is 3,075 ± 3,578 (MiB), while the maximum is 32,303 (MiB). The high variance in memory utilization is a consequence of a search space that correlates with memory utilization. Allocating resources based on the largest trial might be infeasible; using a heuristic for resource utilization might be necessary.

Budget Allocation‡ We vary the number of experimental trials over 10 repeated observations and report the best and mean performance in Figure 3. An increased budget reduces the variance of the mean performance. We report less variance in the performance of the best trial for repeated observations. We conclude that, for 'Tablator', fewer trials are required to obtain an estimate of the top performance, while the mean performance would require more trials.

Implementation Errors∗ Our observations on implementation errors extend previous analysis [46, 27, 36, 12] on the impact of ML tooling, where the sources of errors are poor development practices and variance introduced by tooling.
Packaging has the benefit of incremental development and modular design, where, in the example of 'Tablator', two methods ([45] and [17]) can be combined. Additionally, as the method complexity increases, version control that includes the configuration, and analysis that corresponds to the implementation, can prevent misinterpretation of the results.

3.3 RQ-3: Can ABLATOR be used to perform a large-scale ablation study on Tabular Datasets?

We use 'Tablator', presented in Section 3.1, to evaluate possible improvements in data processing, the Transformer model architecture, and the effect of training hyperparameters on 2,337 trials, where the current largest ablation on tabular datasets is 2,000 trials [48]. Our results are summarized in Figures 4 and 5.

Figure 5: Example of automatically generated analysis artifacts from ABLATOR. On the left are the artifacts for 'CO' [7] and on the right for 'AL' [16]. We compare the effect of an optimizer on the performance on a dataset. In agreement with [44], there is no single model that generalizes across all datasets; for example, Adam [24] under-performs for 'AL' but not for 'CO'. We conclude that separate ablation studies will be required for different datasets.

In Table 1 we report the accuracy, where higher is better ↑, and the root-mean-square error ('RMSE'), where lower is better ↓, on 11 datasets [32, 25, 18, 18, 2, 16, 17, 4, 7, 11, 38], identical to the benchmark of FT-T [17]. We find that Tablator performs similarly on all datasets. The goal of the benchmark comparison is to verify our implementation, while the goal of our study is to evaluate general methods that work best across datasets, not a benchmark improvement. Similarly to FT-T [17], we conclude that the simplest methods work best in most general cases, i.e. SGD [39] with momentum has the best mean performance on 9 of 11 datasets.
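Cross-dataset comparisons like the ones above can be summarized by each component's mean rank across datasets. The helper below sketches that aggregation; the optimizer scores are made-up illustrative values, not results from the study:

```python
from statistics import mean

# Hypothetical mean accuracies of three optimizers on three datasets
# (illustrative values only).
results = {
    "ds1": {"sgd": 0.91, "adam": 0.89, "radam": 0.90},
    "ds2": {"sgd": 0.72, "adam": 0.74, "radam": 0.70},
    "ds3": {"sgd": 0.85, "adam": 0.80, "radam": 0.83},
}

def mean_rank(results):
    """Average the per-dataset rank (1 = best) of each component variant."""
    ranks = {name: [] for name in next(iter(results.values()))}
    for scores in results.values():
        ordered = sorted(scores, key=scores.get, reverse=True)
        for r, name in enumerate(ordered, start=1):
            ranks[name].append(r)
    return {name: mean(rs) for name, rs in ranks.items()}

print(mean_rank(results))
```

Ranks, unlike raw metrics, are comparable across datasets with different score scales, which is why rank aggregation is a common way to compare component variants in multi-dataset ablations.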
For more complex methods, there is a large variance in the performance of the method between datasets. For example, we find that RAdam [28] ranks on average 2.71 for classification datasets but 3.75 for regression datasets when evaluated by the mean performance. Additionally, more complex methods may result in the best-performing trial but perform worse on average, where RAdam ranks on average 2.25 when evaluated on the best-performing trial for regression datasets (compared to 3.75). Our results indicate that using a complex method may require a large tuning budget to return good results. Additionally, we conclude that larger models only perform moderately better (Figure 4).

The high performance variance between different components on different datasets leads us to conclude that evaluations should be done with multiple datasets. Additionally, we find that tuning specific to the dataset and the training configuration would be required. Simple design choices, such as SGD and moderate model capacity, can provide a good starting point, while more complex training configurations can provide trade-offs between performance and uncertainty that can be specific to the use case.

From the median and mean performance observed in our results, we did not find any of the preprocessing methods to have a consistent, significant effect on the model performance. ABLATOR can help provide actionable results specific to the dataset. We conclude that several ablation experiments are required to evaluate a method, and ABLATOR is the only tool currently available to facilitate rapid evaluation.

4 Discussion

In our work we present ABLATOR, an AutoML framework for ablation experiments. Beyond our framework, there are several issues w.r.t. automated decision making, as there is no universal statistical test or threshold to accept or reject a hypothesis. Analysis requires domain expertise relevant to the evaluation setting.
Specific to ML research is the lack of methods for the evaluation of a hypothesis where the metric can be both non-normally distributed and heteroskedastic, i.e. Figure 5.

Broader Impact Statement Performing large-scale ablation experiments may require a large number of computational resources that can negatively impact the environment through CO2 emissions. However, the automation provided by ABLATOR can result in a more effective use of computational resources and reduce CO2 emissions. ABLATOR can help improve research practices without a negative impact on society when used in the context in which it is presented.

5 Related Works

We identify four categories of work that are most similar to ours: work that focuses on errors introduced by tools and incorrect analysis, work on horizontal scaling of experiments, works that aid in ablation studies, and tools for automated HPO.

Previous work [10, 8, 9, 27, 36, 1, 46, 12] identifies the source of erroneous analysis as poor experiment design practices resulting from improper use of statistical evaluation methods, HPO budget, HPO strategies, and tooling, and provides recommendations. We extend their work and investigate errors during horizontal scaling of experiments that lead to erroneous analysis. We identify errors from the sampling strategy, non-random execution errors, and implementation errors. We provide general recommendations in Section 3.2 and address the errors with ABLATOR.

Several tools have been proposed [13, 15, 22, 43, 26] that support distributed experiment execution. However, they require manual effort in integrating with other libraries for resource allocation, scheduling of experiments, resuming faulty trials, result aggregation, configuration sampling, and analysis. In contrast, ABLATOR combines all of the above in an automated fashion, where only the implementation and configuration of the method are required to produce the analysis artifacts.

Ablation frameworks introduce methods and tools specific to constructing ablation analysis artifacts.
Such methods can have limited use cases [19, 5, 37] or lack automation [42]. In contrast, ABLATOR provides analysis artifacts that give a holistic view of a method's performance and can be extended to support automation and the specific use cases addressed by the works above.

AutoML methods [14, 48, 6] are designed for HPO and can be extended to ablation experiments that provide support for automated analysis. Unlike ABLATOR, such tools are designed for simple use cases, such as statistical models, and require additional effort to scale the experiments horizontally. Such tools, and similar ones, can be used as the implementation provided to ABLATOR and as such are orthogonal to our work. AutoAblation [40] extends Maggy [30] to Deep Learning models. However, allocating and managing GPU resources for each trial requires manual effort, and AutoAblation does not provide experiment persistence and as such is not fault-tolerant. Additionally, the declarative design paradigm has limited use cases, as opposed to the object-oriented design of ABLATOR. As such, ABLATOR improves automation by managing GPU resources, storing experimental artifacts, and restarting erroneous trials, removing boiler-plate code, where only the method implementation with the configuration is required to provide automated analysis.

6 Conclusion

In this work, we identify several sources of error common in the horizontal scaling of multiple experimental trials. We provide general recommendations and address the errors with a stateful experiment design paradigm. ABLATOR implements the paradigm to automate the scaling of ablation experiments across multiple resources and produce analysis artifacts in an automated fashion for rapid iterative prototyping. We evaluate ABLATOR with a Transformer model for tabular datasets, 'Tablator', where we study the effect of several architectural components and hyperparameters in the largest ablation study for tabular datasets to date.
ABLATOR is an effective tool to conduct large-scale ablation studies with ease and leads to actionable insights that are particular to the experimental setting.

References

[1] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems, 34:29304–29320, 2021.
[2] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature communications, 5(1):4308, 2014.
[3] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in neural information processing systems, 24, 2011.
[4] Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. 2011.
[5] André Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett, and Holger Hoos. Efficient parameter importance analysis via ablation with surrogates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[6] André Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. Cave: Configuration assessment, visualization and evaluation. In Roberto Battiti, Mauro Brunato, Ilias Kotsireas, and Panos M. Pardalos, editors, Learning and Intelligent Optimization, pages 115–130, Cham, 2019. Springer International Publishing.
[7] Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and electronics in agriculture, 24(3):131–151, 1999.
[8] Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti, et al. Accounting for variance in machine learning benchmarks.
Proceedings of Machine Learning and Systems, 3:747–769, 2021.
[9] Xavier Bouthillier, César Laurent, and Pascal Vincent. Unreproducible research is reproducible. In International Conference on Machine Learning, pages 725–734. PMLR, 2019.
[10] Xavier Bouthillier and Gaël Varoquaux. Survey of machine-learning experimental methods at NeurIPS 2019 and ICLR 2020. PhD thesis, Inria Saclay Ile de France, 2020.
[11] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the learning to rank challenge, pages 1–24. PMLR, 2011.
[12] Katharina Eggensperger, Marius Lindauer, and Frank Hutter. Pitfalls and best practices in algorithm configuration. Journal of Artificial Intelligence Research, 64:861–893, 2019.
[13] William Falcon et al. Pytorch lightning. GitHub repository, 3, 2019.
[14] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-sklearn 2.0: The next generation. CoRR, abs/2007.04074, 2020.
[15] V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Tejani. High-level library to help with training neural networks in pytorch. https://github.com/pytorch/ignite, 2020.
[16] Jan-Mark Geusebroek, Gertjan J Burghouts, and Arnold WM Smeulders. The amsterdam library of object images. International Journal of Computer Vision, 61:103–112, 2005.
[17] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. CoRR, abs/2106.11959, 2021.
[18] Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, et al. Analysis of the automl challenge series. Automated Machine Learning, 177, 2019.
[19] Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang, Brian Barr, and C Bayan Bruss. Based-xai: Breaking ablation studies down for explainable artificial intelligence.
arXiv preprint arXiv:2207.05566, 2022.
[20] Eduardo Hariton and Joseph J Locascio. Randomised controlled trials—the gold standard for effectiveness research. BJOG: an international journal of obstetrics and gynaecology, 125(13):1716, 2018.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[22] Jeremy Howard and Sylvain Gugger. fastai: A layered API for deep learning. CoRR, abs/2002.04688, 2020.
[23] Kosuke Imai, Dustin Tingley, and Teppei Yamamoto. Experimental Designs for Identifying Causal Mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society, 176(1):5–51, 11 2012.
[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[25] Ron Kohavi et al. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid. In Kdd, volume 96, pages 202–207, 1996.
[26] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
[27] Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. On the reproducibility and replicability of deep learning in software engineering. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(1):1–46, 2021.
[28] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.
[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[30] Moritz Meister, Sina Sheikholeslami, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. Maggy: Scalable asynchronous parallel hyperparameter search.
In Proceedings of the 1st Workshop on Distributed Machine Learning, pages 28–33, 2020.
[31] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. CoRR, abs/1712.05889, 2017.
[32] R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.
[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Curran Associates Inc., Red Hook, NY, USA, 2019.
[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[35] David Picard. Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision, 2021.
[36] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machine learning research (a report from the neurips 2019 reproducibility program). The Journal of Machine Learning Research, 22(1):7459–7478, 2021.
[37] Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hyperparameters of machine learning algorithms. The Journal of Machine Learning Research, 20(1):1934–1965, 2019.
[38] Tao Qin and Tie-Yan Liu. Introducing letor 4.0 datasets.
arXiv preprint arXiv:1306.2597, 2013.
[39] Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals of mathematical statistics, pages 400–407, 1951.
[40] Sina Sheikholeslami, Moritz Meister, Tianze Wang, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. Autoablation: Automated parallel ablation studies for deep learning. In Proceedings of the 1st Workshop on Machine Learning and Systems, pages 55–61, 2021.
[41] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Hugo Jair Escalante and Katja Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 3–26. PMLR, 06–12 Dec 2021.
[42] Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2367–2376, 2018.
[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, October 2020. Association for Computational Linguistics.
[44] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEE transactions on evolutionary computation, 1(1):67–82, 1997.
[45] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al.
Big bird: Transformers for longer sequences. Advances in neural information processing systems, 33:17283–17297, 2020.
[46] Donglin Zhuang, Xingyao Zhang, Shuaiwen Song, and Sara Hooker. Randomness in neural network training: Characterizing the impact of tooling. Proceedings of Machine Learning and Systems, 4:316–336, 2022.
[47] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar C Tatikonda, Nicha Dvornek, Xenophon Papademetris, and James Duncan. Adabelief optimizer: Adapting stepsizes by the belief in observed gradients. Advances in neural information processing systems, 33:18795–18806, 2020.
[48] Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-pytorch tabular: Multi-fidelity metalearning for efficient and robust autodl. arXiv preprint arXiv:2006.13799, 2020.

7 Submission Checklist

1. For all authors. . .
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] Our results can be found in sections 3.1 to 3.3.
(b) Did you describe the limitations of your work? [Yes] See section 4.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See section 4.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] They are applied throughout the paper.
2. If you are including theoretical results. . .
(a) Did you state the full set of assumptions of all theoretical results? [N/A] There are no theoretical results in our work.
(b) Did you include complete proofs of all theoretical results? [N/A] There are no theoretical results in our work.
3. If you ran experiments. . .
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)?
[Yes] We have included the code that was used to run all the experiments and produce the tables and figures as a zip file.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] We include the raw results that were used to obtain our analysis.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] We have included them in the supplementary.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] We have followed standard development practices.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyper-parameter settings, and how they were chosen)? [Yes] We have included them in the supplementary.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We have included them in the supplementary.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See section 3.3.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We use an identical evaluation protocol when comparing between methods for all our experiments in sections 3.1 to 3.3.
(i) Did you compare performance over time? [N/A] Performance over time is not applicable for our work.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] The random seeds used are in the code in our supplementary.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Results are in sections 3.2 and 3.3.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations?
[Yes] We use the same benchmark as [17].
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] We have included it in the supplementary.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [Yes] They are described in section 3.1 and the supplementary.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .
(a) If your work uses existing assets, did you cite the creators? [Yes] See table 1 and the supplementary.
(b) Did you mention the license of the assets? [Yes] We provide details of all assets in the supplementary.
(c) Did you include any new assets either in the supplemental material or as a url? [N/A] We do not use any new assets.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
Keywords: Machine Learning Systems, Ablation Experiments, Experiment Design
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments

Iordanis Fostiropoulos, Laurent Itti
University of Southern California, Los Angeles, California

Abstract Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as 'data-parallelism' or 'model-parallelism'. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, 'Tablator', where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open-source ABLATOR: https://github.com/fostiropoulos/ablator

1 Introduction

Machine Learning (ML) research has been criticized for an inability to explain the reasons a method provides an improvement on a specific benchmark.
It can be unclear whether a novel component is responsible for the improvement or it is the result of a statistical outlier [35].

Ablation is used to understand how the hyperparameters and architectural components contribute to the performance of a method. This is in contrast to Hyper-Parameter Optimization (HPO) or Neural Architecture Search (NAS), where the objective is to search for the single best-performing configuration. As the complexity of ML models increases, so does the number of components and parameters that need to be ablated, which increases the search space of possible configurations. Therefore, efficient horizontal scaling of multiple parallel experimental trials is necessary.

There is a lack of available frameworks for horizontal scaling of ablation experiments. Currently, ML practitioners manually perform horizontal scaling for experiments, such as for hyperparameter selection, distributed execution, consolidation, and analysis of artifacts [10]. Additionally, current frameworks [31] for distributed execution do not provide native support for maintaining the state of an experiment and resuming the execution of multiple trials, referred to as experiment persistence. We find that errors in the early stages of experiments can propagate to the analysis and lead to misleading conclusions. Possible errors may be introduced from sampling bias in the hyperparameter selection strategy or the distributed execution fault-intolerance, survival bias.

The execution of randomized control trials is necessary to determine causal effects [23, 20]. We identify several sources of errors that can influence the results. We categorize them as Analysis, Execution, and Implementation errors.
AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Figure 1: Left is the rapid prototyping process when using ABLATOR, where only the method implementation and the configuration are required to RUN() the study and provide ANALYSIS(). ABLATOR handles the horizontal scaling of experimental trials on a cluster of nodes and is fault tolerant, where trials can be continued on the same or a different node due to the Persistence provided by ABLATOR. Right is the process without ABLATOR, where the user must use different libraries or manually perform ‘HPO Selection’, ‘Resource Allocation’, and ‘Analysis’. Additional manual effort is required to integrate between the libraries, and errors between different steps propagate to an erroneous analysis. ABLATOR provides automation by removing boiler-plate code and managing errors internally.

Analysis errors can result from sampling bias in the hyperparameter selection. Nonrandom effects during experiment execution can also introduce analysis errors. For example, inconclusive trials due to out-of-memory errors caused by a larger model footprint would introduce survival bias to the analysis that favors smaller models. Implementation errors are mistakes made by users, caused by the increased code complexity of ablating multiple method components while maintaining different code bases. We discuss the details of our analysis in Section 3.2.

To aid in error-free horizontal scaling of multiple experiments in the ML community, we propose a stateful experiment paradigm where we unify all experiment stages under a single framework. A stateful experiment is initialized by the configuration and code implementation of a method. Our framework maintains the state of each experimental trial and provides experiment persistence, where the experiment can continue execution agnostic to the execution environment.
The analysis artifacts are produced automatically from the experiment state for faster prototyping. Our paradigm is implemented in our tool ABLATOR with support for PyTorch [33] model development. We present an analysis of the sources of errors and provide recommendations that can be useful beyond our framework. We use our framework to study the effect of multiple training and model components on the performance of a Transformer model for tabular datasets, ‘Tablator’, where we perform a large-scale ablation study of 2,337 trials. Our contributions can be summarized as follows. First, we provide a formalization of a stateful experiment design paradigm that we use to address common errors in the execution of ML experiments. Second, we present ABLATOR, a framework that implements our paradigm and facilitates the automated execution and analysis of a model implementation given a configuration. Third, we identify sources of error in ML ablation studies and provide recommendations for mitigating them. Fourth, we perform the largest ablation study to date of a Deep Learning model on tabular datasets and provide analysis that can be useful to the research community.

We first introduce the features of ABLATOR relevant to horizontal scaling of experiments. Next, we evaluate the main features of our tool in a case study demonstrating the horizontal scaling capabilities of ABLATOR. We present our results using three research questions, Sections 3.1 to 3.3.

2 Methods

To implement ABLATOR and address common issues in horizontal scaling of experiments, it is necessary to introduce the formalism of a ‘stateful experiment design’ paradigm. In this section, we introduce our paradigm and, in Section 2.4, the implementation of ABLATOR.
We identify three stages of an experiment: the design, execution, and analysis (Sections 2.1 to 2.3).

2.1 Experiment Design

During the design phase of an ML ablation study, a hypothesis is defined as an experiment on the improvement that an architectural component, such as Residual Connections, provides to the performance of the model. The search-space of our hypothesis can be defined as Residual = [True, False]. The methodology of our experiment is defined by the implementation of the model.

Multiple experimental trials are required to improve the statistical power of a test [20], which requires randomly sampling from the search-space. An experimental trial can be described as a stochastic process that produces a performance metric. The stochasticity can be observed when performance differs significantly under identical initial conditions, such as re-running the same experiment but obtaining different results.

Thus, to define a trial, we maintain two states that describe the system at any given point: the initial conditions (Sections 2.1.1 and 2.1.2) and the current state (Section 2.2). The initial conditions of a trial are defined by the sampled hyper-parameters and the implementation.

distributed.yaml:

    total_trials: 2000
    optim_metrics: [[val_loss, min]]
    tune:
      train_config.optimizer_config.name: ["adam", ...]
      train_config.dataset: ["year", "yahoo", "helena", ...]
      model_config.mask_type: ["mix", "global", "full", "random"]
      model_config.residual: [True, False]
      model_config.random_mask_alpha: [0.5, 1]

prototyping.yaml:

    train_config:
      dataset: adult
      optimizer_config:
        name: adam
    model_config:
      mask_type: random

    @configclass
    class TablatorConfig(ModelConfig):
        residual: bool = True
        d_out: Derived[ty.Optional[int]] = None
        mask_type: MaskType = MaskType("random")

    @configclass
    class RunConfig(ParallelConfig):
        experiment_dir: Stateless[Optional[str]] = None
        model_config: ModelConfig
        train_config: TrainConfig

Figure 2: ABLATOR provides a configuration system specific to ML experiments, where it has to encompass multiple trials in a compact definition and be unambiguous. On the left is an illustration of the configuration for distributed execution (distributed.yaml) and method prototyping (prototyping.yaml). On the right, the configuration is type-checked by the ABLATOR library. The library provides flexible type definitions (red) that are resolved during run-time. The configuration is compact and unambiguous at initialization, supporting our stateful experiment design paradigm in Section 2.1.

2.1.1 Configuration describes the hyperparameter search-space from which the hyperparameters are sampled. Two custom Python annotations are introduced, Stateless and Derived, to define attributes to which the experiment state is agnostic, while unannotated attributes are assumed to be stateful control variables. Stateful attributes require an assignment during the initialization stage unless they are annotated as Optional.

Stateless configuration attributes can be used as a proxy for variables that can take different value assignments between trials or experiments. For example, the learning rate can be set as an independent variable and must be annotated as stateless. Additionally, there are variables that take different values between experiments and trials to which the state is agnostic; for example, a random seed or a directory path that varies between execution environments can be annotated as stateless.

Derived attributes are undecided at the start of the experiment and do not require a value assignment. Instead, the value is determined by internal experiment processes that can depend on other experimental attributes, such as the dataset.
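The annotation idea can be illustrated with a small, self-contained sketch. The Stateless and Derived markers and the annotation_of helper below are illustrative stand-ins built on typing.Annotated, not ABLATOR's actual implementation:

```python
# Illustrative sketch of Stateless / Derived attribute markers.
# These classes and the helper are stand-ins, NOT ABLATOR's real API.
import typing as ty
from dataclasses import dataclass


class Stateless:
    """Marker: the experiment state is agnostic to this attribute's value."""


class Derived:
    """Marker: the value is resolved deterministically at run-time."""


@dataclass
class ModelConfig:
    residual: bool = True  # unannotated: a stateful control variable
    experiment_dir: ty.Annotated[ty.Optional[str], Stateless] = None
    d_out: ty.Annotated[ty.Optional[int], Derived] = None  # e.g. inferred from dataset


def annotation_of(cfg_cls, name):
    """Classify a config attribute as 'stateful', 'stateless', or 'derived'."""
    hints = ty.get_type_hints(cfg_cls, include_extras=True)
    extras = getattr(hints[name], "__metadata__", ())
    if Stateless in extras:
        return "stateless"
    if Derived in extras:
        return "derived"
    return "stateful"
```

A configuration system can then enforce the rules per category, e.g. exclude stateless attributes from an experiment's identity and defer assignment of derived attributes until the dataset is known.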
However, given the same initial state, the attribute is expected to result in the same value and is therefore deterministic. For example, the input size used in a model's architecture, which depends on the dataset, will be annotated as Derived during the experiment design phase.

The annotations address common requirements of ML experiments, where a configuration may have to describe a search-space that encompasses multiple trials, as opposed to taking on a specific value assignment at initialization. Additionally, an ML experiment can have attributes that are difficult to model at initialization but can be inferred during execution. For a stateful design paradigm, the configuration should be unambiguous at the initialization state, i.e. Figure 2.

2.1.2 Implementation describes the methodology of the hypothesis. Invariance of the implementation w.r.t. the method evaluated produces a single code artifact that encapsulates all methods, i.e. a single code base for using and not using residual connections. The implementation computes one or more evaluation metrics. Lastly, the implementation should have a deterministic value assignment for the variables we defined as Derived.

Implementation invariance provides a compact representation and is robust to errors. A compact representation provides ease of use as a consequence of a shared implementation among the ablating components, where the differences are specified through the configuration and applied by conditional if statements. The advantage of this approach is that the performance variance caused by implementation differences is minimized, where even the order of matrix multiplication can have significant effects on the method performance [46].

2.2 Experiment Execution

Experiment state can be Running or Complete as the aggregate of the state of all experimental trials. Each trial can additionally be in one of three states: Pending, Failed, or Pruned. Pending trials are defined by their initial conditions alone, i.e. the sampled hyperparameters. A Running trial extends the definition to include a checkpoint. Complete trials extend the definition to include one or more metrics, such as the validation loss. Pruned and Failed trials are the result of irrecoverable errors during initialization or execution. A fault-tolerant strategy reschedules trials with recoverable errors as Pending and attempts to resume from the checkpoint. A long-running experiment can be interrupted (e.g. server maintenance) while errored trials do not interfere with the results (e.g. failed trials due to recoverable errors).

Checkpoint describes the optimization state of a trial and contains sufficient information to resume execution. ABLATOR stores the model weights, optimizer, scheduler, and training meta-data, such as the current training iteration, in a compact representation. The checkpoint mechanism in ABLATOR can be extended to support custom use cases, e.g. RL. Lastly, maintaining the state of the experiment requires keeping track of the checkpoints and results. Multiple checkpoints are stored locally on each node and can be synchronized with cloud storage. The experiment is agnostic to the execution environment; experiment persistence.

2.3 Actionable Analysis

Analysis that is actionable is a result of automation that provides sufficient artifacts to support decision making. The artifacts should help facilitate a quick and informed decision on the likelihood of the hypothesis. The experiment state is used to infer the hypothesis, i.e. ‘what are we ablating?’, and the conclusiveness of the analysis, i.e. ‘is the trial failed?’. The analyses ABLATOR provides infer the search-space, such as control and independent variables, from the configuration and the variable type to produce the corresponding artifacts. The artifacts produced address common problems in evaluating ML methods (Section 3.2).
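As a sketch of this reduction step, trial records can be filtered for conclusiveness and grouped by an ablated attribute into summary statistics. The trial records, field names, and metric below are illustrative, not ABLATOR's internal schema:

```python
# Minimal sketch: reduce an experiment state to per-attribute summary
# statistics (best / mean / spread), excluding inconclusive trials.
# Records and field names are illustrative, not ABLATOR's schema.
from statistics import mean, pstdev

trials = [
    {"optimizer": "sgd",  "status": "complete", "val_loss": 0.31},
    {"optimizer": "sgd",  "status": "complete", "val_loss": 0.35},
    {"optimizer": "adam", "status": "complete", "val_loss": 0.42},
    {"optimizer": "adam", "status": "failed",   "val_loss": None},  # excluded
]


def summarize(trials, attr, metric="val_loss"):
    """Group Complete trials by a categorical attribute and summarize the metric."""
    groups = {}
    for t in trials:
        if t["status"] != "complete":
            continue  # Failed/Pruned trials must not enter the analysis
        groups.setdefault(t[attr], []).append(t[metric])
    return {
        k: {"best": min(v), "mean": mean(v), "std": pstdev(v)}
        for k, v in groups.items()
    }


summary = summarize(trials, "optimizer")
```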
For each attribute, the goal is to encapsulate the best, average, variance, and distribution of the performance metric under a single figure; i.e. Figures 4 and 5.

2.4 ABLATOR

ABLATOR is designed in Python with support for PyTorch models, while the distributed execution system uses Ray Core [31]; Figure 1. We describe the features of ABLATOR important in addressing a stateful experiment paradigm. ABLATOR can be extended or customized to the specific use-case without loss of automation, where an object-oriented design provides access to function overwriting. The features of ABLATOR provide ease of use, as it only requires defining an experiment through an implementation and a configuration. Automation is supported by providing an abstraction layer on distributed execution with fault tolerance, artifact consolidation, and analysis. Our framework is agnostic to the execution environment and can run on a laptop or a cluster of nodes.

Configuration uses a hierarchical dictionary-like format that is easy to understand and can be converted to and from yaml files. ABLATOR uses a strict type-checking system with custom annotations (Section 2.1.1). A unique signature identifier ("ID") is generated for each experiment that corresponds to the values of the stateful configuration attributes, while for a trial, the identifier is based on the unique value assignment of all configurable properties. Thus, the configuration system allows for a hierarchical representation of trials under a single experiment and facilitates experiment persistence, where multiple experiments are stored in the same directory.

Implementation A Trainer class manages the physical resources of the experiment. There are two options according to the use case: ProtoTrainer for prototyping in a local environment, and ParallelTrainer for horizontal scaling of a single experiment. ParallelTrainer is unique to ABLATOR, where multiple trials are managed and executed in parallel.
Prototyping to experiment deployment requires a single change: ProtoTrainer ⇒ ParallelTrainer.

Artifact Persistence For every resource node, the trials are executed in parallel, and a failure in a single trial does not result in interruption of the experiment. We use the master node to maintain the experiment state (Section 2.2) and synchronize the artifacts of all nodes with a central database. Cloud compute nodes are often ephemeral, and restarting the experiment only requires the files to be synchronized among the centralized storage and all nodes. Furthermore, the files stored in the central storage are sufficient to perform an analysis or recover from errors.

Analysis Artifacts are specific to numerical attributes and categorical attributes. The attribute type is informed by the configuration. Figures are artifacts that summarize the mean, best, and distribution of a performance metric. For numerical attributes, we use scatter-plots with optional interpolation curves, while for categorical attributes we use violin-plots. The analysis can be extended to support custom use cases, such as additional figures or tables, while still being automatically generated from the experiment state; examples are in Section 3.3 and our supplementary.

3 Experiments and Results

We first present how ABLATOR can be used for horizontal scaling with an ablation study on ‘Tablator’, a Transformer model we designed for this study; Section 3.1. In Section 3.2 we categorize common errors during horizontal scaling of ablation experiments and provide our recommendations. In Section 3.3 we provide the results of an ablation experiment on a tabular dataset benchmark. For reasons of brevity, we discuss only the results most relevant to ABLATOR. We attach the code that was used for our experiments and analysis, and additional experiments, in the supplementary.

3.1 RQ-1: How can ABLATOR improve the horizontal scaling of thousands of experimental trials?

ABLATOR requires the configuration and implementation.
We extend the implementation of FT-Transformers (FT-T)¹ [17] with minimal changes to the original code. We implement a model we call ‘Tablator’ and evaluate all the design components of FT-T as well as the effect of Residual Connections [21] and Attention Masks inspired by BigBird [45]. We evaluate ‘Full’, ‘Mixed’, ‘Global’, and ‘Random’ attention mechanisms and explain their implementation in the supplementary.

We perform an ablation on 14 model hyperparameters and components in total, and evaluate the effect that model capacity, dropout hyper-parameters, prenormalization, weight initialization, and activation function have on the model performance. Additionally, we evaluate 7 dataset preprocessing techniques and training configurations, such as feature encoding methods, missing value imputation, feature normalization, training time, and optimization.

¹https://github.com/Yura52/tabular-dl-revisiting-models

The differences between ‘Tablator’ and FT-T are an additional module for Attention masks that requires 9 additional lines of code, as well as 2 lines of code insertions for residual connections. The majority of the development effort was directed towards making the original dataset performant and converting it to a PyTorch Dataset as opposed to a Python dataclass. We define the tunable configurable hyperparameters as shown in Figure 2.

We first verified our implementation with a ProtoTrainer and then scaled our experiment to thousands of trials with a single code change using a ParallelTrainer, for our results in Section 3.3.
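This single-change scaling step can be mimicked with generic stand-ins; the classes below only illustrate the shared-interface design and are not ABLATOR's actual trainer API (which also handles configuration sampling, checkpoints, and cluster resources):

```python
# Stand-in sketch of promoting a prototype run to parallel execution by
# swapping the trainer class. NOT ABLATOR's real API; only the shared
# interface between the two trainers is illustrated here.
from concurrent.futures import ThreadPoolExecutor


def run_trial(config):
    """Toy objective standing in for training one configuration."""
    lr = config["lr"]
    return {"config": config, "val_loss": (lr - 0.1) ** 2}


class ProtoTrainer:
    """Runs trials sequentially in the local environment."""
    def launch(self, configs):
        return [run_trial(c) for c in configs]


class ParallelTrainer(ProtoTrainer):
    """Same interface, but fans trials out to workers."""
    def launch(self, configs):
        with ThreadPoolExecutor(max_workers=4) as pool:
            return list(pool.map(run_trial, configs))


configs = [{"lr": lr} for lr in (0.01, 0.1, 0.5)]
# Promotion to distributed execution is a single change of trainer class:
results = ParallelTrainer().launch(configs)
best = min(results, key=lambda r: r["val_loss"])
```

Because both trainers implement the same launch interface and the objective is deterministic, the two produce identical results, which is the property that makes the one-line swap safe.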
For this experiment, it took significantly more time to write the current section of this paper than it took to write the code and start the execution of the experiments.

3.2 RQ-2: What are common sources of errors during horizontal scaling of experiments?

We identify 3 categories of errors, Analysis†, Execution‡, and Implementation∗ errors, that are based on empirical observations, and we use previous analyses [10, 8, 9, 27, 36, 1, 46, 12] to support our conclusions. In this section, we provide examples of each and attach additional analysis in our supplementary.

Figure 3: We evaluate how Budget Allocation‡ can influence the analysis of an ablation study. We vary the number of trials we use for analysis (‘N trials’). We compare estimating the performance of a method on a dataset using the mean (left) (i.e. ANOVA) or the best (right) trial (i.e. proof-by-existence). Evaluating the performance of a component by its mean performance would require fewer trials for an easier dataset (‘Covtype’) when compared to using the best trial. For a more challenging dataset (‘Aloi’), evaluating by the best trial would be more efficient, as the performance converges at around 20 trials (right figure) compared to >50 for the mean (left figure). We conclude that the ablation budget should be taken into account and be relevant to the type of analysis.

Sampling Strategy† can be incompatible with the method used to evaluate the performance of a component and lead to misleading analysis [41]. For example, performing HPO and comparing the mean performance of the sampled trials can bias the result towards a single component variant. We perform two identical experiments using Tablator with an identical budget for the CovType (‘CO’) dataset [7]. When randomly sampling between 5 optimizers, AdaB [47], Adam [24], AdamW [29], RAdam [28], and SGD [39], every optimization algorithm was sampled with an even probability P(O) ≈ 0.2.
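A toy, fully synthetic simulation illustrates the underlying effect: an adaptive greedy sampler (a crude bandit-style stand-in for TPE) concentrates its budget on the seemingly best optimizer, while random sampling keeps the per-optimizer sample counts roughly uniform. All performance numbers below are invented for illustration:

```python
# Synthetic illustration of sampler-induced bias. The greedy sampler is a
# crude stand-in for an adaptive HPO method such as TPE; accuracies are
# made-up numbers, not measurements from the paper.
import random

random.seed(0)
OPTIMIZERS = {"adab": 0.80, "adam": 0.82, "adamw": 0.82, "radam": 0.81, "sgd": 0.85}


def trial_score(opt):
    return OPTIMIZERS[opt] + random.gauss(0, 0.02)  # noisy observed accuracy


def run(sampler, budget=500):
    counts = {o: 0 for o in OPTIMIZERS}
    scores = {o: [] for o in OPTIMIZERS}
    for _ in range(budget):
        opt = sampler(scores)
        counts[opt] += 1
        scores[opt].append(trial_score(opt))
    return counts


def random_sampler(scores):
    return random.choice(list(OPTIMIZERS))


def greedy_sampler(scores, explore=0.1):
    # Explore occasionally; otherwise exploit the best running mean.
    if random.random() < explore or any(not v for v in scores.values()):
        return random.choice(list(OPTIMIZERS))
    return max(scores, key=lambda o: sum(scores[o]) / len(scores[o]))


uniform = run(random_sampler)
biased = run(greedy_sampler)
```

Under the adaptive sampler, the undersampled optimizers are evaluated from far fewer trials, so their mean-performance estimates are noisy and typically pessimistic, which is the bias the experiment above measures.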
On the contrary, when performing HPO with a Tree-structured Parzen Estimator (TPE) [3], SGD was oversampled with P(SGD) = 0.76, as it was found to perform relatively better compared to other methods. Other optimization methods were undersampled by TPE, and their estimated performance is lower when compared to the empirical mean performance of the same method calculated via Random Sampling. When TPE was used, all optimizers appeared to underperform on average by 4.6% and 3.8% when evaluating the best and mean trial performance, respectively. We conclude that statistical tests can be influenced by the bias of the HPO method used to sample configurations, and their performance might not be fully explored.

Table 1: We evaluate the difference between the best-performing trials as reported by FT-Transformer (‘FT-T’) [17] and as found by our ablation experiments in Section 2.1. FT-T is in the subspace of configurations of Tablator, where a greedy HPO strategy is used as opposed to random sampling for Tablator. As such, we expect Tablator to perform similarly but not better. We use the benchmark as a way to evaluate Implementation Errors∗ from Section 3.2. We conclude that our implementation contains no errors, as the relative difference (Δ Imp.∗) is within the expected margin of error between HPO and random sampling.

Dataset    CA↓     AD↑    HE↑    JA↑    HI↑    AL↑    EP↑    YE↓    CO↑    YA↓     MI↓
FT-T       0.459   0.859  0.391  0.732  0.729  0.960  0.898  8.855  0.970  0.756   0.746
Tablator   0.535   0.856  0.368  0.718  0.723  0.921  0.896  8.778  0.930  0.780   0.749
Δ Imp.∗   -0.076   0.003  0.023  0.014  0.006  0.039  0.002  0.077  0.04  -0.024  -0.003

Survival Bias† can be caused by nonrandom execution errors. We identify the trials for which there were memory errors. We perform feature importance analysis and use a surrogate random forest model [34] to predict whether a trial will result in a memory error. We find that the configuration attributes related to the dataset and the hidden dimension were the most important.
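The mechanism behind this bias can be reproduced in outline with synthetic numbers. The paper's analysis used a surrogate random-forest model [34]; the sketch below only demonstrates how nonrandom out-of-memory failures skew the surviving trials toward smaller models (all values are invented):

```python
# Synthetic reconstruction of survival bias: out-of-memory failures hit
# large configurations more often, so surviving trials skew small.
# All footprints, thresholds, and counts are made up for illustration.
import random

random.seed(1)
trials = []
for _ in range(1000):
    hidden_dim = random.choice([64, 128, 256, 512, 1024])
    n_features = random.randint(10, 2000)          # proxy for dataset size
    mem = hidden_dim * n_features                  # crude memory footprint
    status = "failed" if mem > 600_000 else "complete"  # OOM above a budget
    trials.append({"hidden_dim": hidden_dim, "status": status})


def mean_dim(rows):
    return sum(t["hidden_dim"] for t in rows) / len(rows)


survivors = [t for t in trials if t["status"] == "complete"]
# Surviving trials have a systematically smaller hidden dimension, so a
# naive analysis restricted to them favors smaller models.
```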
A larger dataset has more features, which leads to a model with a larger hidden dimension. The attributes related to the hidden dimension scored 23% higher than the average feature importance. We conclude that smaller models and datasets will have a Survival Bias from the fewer out-of-memory execution errors, and that such bias could be mitigated by better resource allocation. For example, one can group experiments by their memory utilization so as to avoid out-of-memory errors from the largest trial.

Figure 4: Evaluation of the effect of a larger model for a regression dataset, where (RMSE)↓ is normalized for the relative difficulty of each dataset. A larger model performs better but with higher variance, where the uncertainty on the estimated performance increases. A larger model might be a more risky choice when deploying a model that requires to be iteratively trained.

Resource Utilization statistics‡ We observe the resource utilization statistics: the mean usage of a trial is 3,075 ± 3,578 (MiB), while the maximum is 32,303 (MiB). The high variance in memory utilization is a consequence of a search space that correlates with memory utilization. Allocating resources based on the largest trial might be infeasible. Using a heuristic for resource utilization might be necessary.

Budget Allocation‡ We vary the number of experimental trials for 10 repeated observations and report the best and mean performance in Figure 3. An increased budget reduces the variance of the mean performance. We report less variance in the performance of the best trial for repeated observations. We conclude that, for ‘Tablator’, fewer trials are required to obtain an estimate of the top performance, while the mean performance would require more trials.

Implementation Errors∗ Our observations on implementation errors extend previous analyses [46, 27, 36, 12] on the impact of ML tooling, where the sources of errors are poor development practices and variance introduced by tooling.
Packaging has the benefit of incremental development and modular design, where in the example of ‘Tablator’ two methods ([45] and [17]) can be combined. Additionally, as the method complexity increases, version control that includes the configuration, together with analysis that corresponds to the implementation, can prevent misinterpretation of the results.

3.3 RQ-3: Can ABLATOR be used to perform a large-scale ablation study on Tabular Datasets?

We use ‘Tablator’, presented in Section 3.1, to evaluate possible improvements in data processing, the Transformer model architecture, and the effect of training hyperparameters on 2,337 trials, where the current largest ablation on tabular datasets is 2,000 trials [48]. Our results are summarized in Figures 4 and 5.

Figure 5: Example of automatically generated analysis artifacts from ABLATOR. On the left are the artifacts for ‘CO’ [7] and on the right for ‘AL’ [16]. We compare the effect of an Optimizer on the performance on a dataset. In agreement with [44], there is no single model that generalizes across all datasets; for example, Adam [24] under-performs for ‘AL’ but not for ‘CO’. We conclude that separate ablation studies will be required for different datasets.

In Table 1 we report the Accuracy, where higher is better ↑, and the root-mean-square error (‘RMSE’), where lower is better ↓, on 11 datasets [32, 25, 18, 18, 2, 16, 17, 4, 7, 11, 38], identical to the benchmark of FT-T [17]. We find that Tablator performs similarly on all datasets. The goal of the benchmark comparison is to verify our implementation, while the goal of our study is to evaluate general methods that work best across datasets, not a benchmark improvement. Similarly to FT-T [17], we conclude that the simplest methods work best in most general cases, i.e. SGD [39] with momentum has the best mean performance on 9 of 11 datasets.
For more complex methods, there is a large variance in the performance of the method between datasets. For example, we find that RAdam [28] ranks on average 2.71 for classification datasets but 3.75 for regression datasets when evaluated by the mean performance. Additionally, more complex methods may result in the best-performing trial but perform worse on average, where RAdam ranks on average 2.25 when evaluated on the best-performing trial for regression datasets (compared to 3.75). Our results indicate that using a complex method may require a large tuning budget to return good results. Additionally, we conclude that larger models only perform moderately better; Figure 4. The high performance variance between different components on different datasets leads us to conclude that evaluations should be done with multiple datasets. Additionally, we find that tuning specific to the dataset and the training configuration would be required. Simple design choices, such as SGD and moderate model capacity, can provide a good starting point, while more complex training configurations can provide trade-offs in performance and uncertainty that can be specific to the use case.

From the median and mean performance observed in our results, we did not find any of the preprocessing methods to have a consistent, significant effect on the model performance. ABLATOR can help provide actionable results specific to the dataset. We conclude that several ablation experiments are required to evaluate a method, and ABLATOR is the only tool currently available to facilitate rapid evaluation.

4 Discussion

In our work we present ABLATOR, an AutoML framework for ablation experiments. Beyond our framework, there are several open issues w.r.t. automated decision making, as there is no universal statistical test or threshold to accept or reject a hypothesis. Analysis requires domain expertise relevant to the evaluation setting.
Specific to ML research is the lack of methods for evaluating a hypothesis where the metric can be both non-normally distributed and heteroskedastic, i.e. Figure 5.

Broader Impact Statement Performing large-scale ablation experiments may require a large number of computational resources that can negatively impact the environment through CO2 emissions. However, the automation provided by ABLATOR can result in more effective use of computational resources and reduce CO2 emissions. ABLATOR can help improve research practices without a negative impact on society when used in the context in which it is presented.

5 Related Works

We identify four categories of work that are most similar to ours: work that focuses on errors introduced by tools and incorrect analysis, work on horizontal scaling of experiments, works that aid in ablation studies, and tools for automated HPO.

Previous work [10, 8, 9, 27, 36, 1, 46, 12] identifies the sources of erroneous analysis as poor experiment design practices resulting from improper use of statistical evaluation methods, HPO budget, HPO strategies, and tooling, and provides recommendations. We extend their work and investigate errors during horizontal scaling of experiments that lead to erroneous analysis. We identify errors from the sampling strategy, non-random execution errors, and implementation errors. We provide general recommendations in Section 3.2 and address the errors with ABLATOR.

Several tools have been proposed [13, 15, 22, 43, 26] that support distributed experiment execution. However, they require manual effort in integrating with other libraries for resource allocation, scheduling of experiments, resuming faulty trials, result aggregation, configuration sampling, and analysis. In contrast, ABLATOR combines all of the above in an automated fashion, where only the implementation and configuration of the method are used to produce the analysis artifacts.

Ablation frameworks introduce methods and tools specific to constructing ablation analysis artifacts.
Such methods can have limited use cases [19, 5, 37] or lack automation [42]. In contrast, ABLATOR provides analysis artifacts that give a holistic view of a method's performance and can be extended to support automation and the specific use-cases addressed by the works above.

AutoML methods [14, 48, 6] are designed for HPO and can be extended to ablation experiments with support for automated analysis. Unlike ABLATOR, such tools are designed for simple use cases, such as statistical models, and require additional effort to scale the experiments horizontally. Such tools, and similar ones, can be used as the implementation provided to ABLATOR and as such are orthogonal to our work. AutoAblation [40] extends Maggy [30] to Deep Learning models. However, allocating and managing GPU resources for each trial requires manual effort, and AutoAblation does not provide experiment persistence and as such is not fault-tolerant. Additionally, the declarative design paradigm has limited use cases, as opposed to the object-oriented design of ABLATOR. As such, ABLATOR improves automation by managing GPU resources, storing experimental artifacts, restarting erroneous trials, and removing boiler-plate code, where only the method implementation with the configuration is required to provide automated analysis.

6 Conclusion

In this work, we identify several sources of error common in horizontal scaling of multiple experimental trials. We provide general recommendations and address the errors with a stateful experiment design paradigm. ABLATOR implements the paradigm to automate the scaling of ablation experiments across multiple resources and produces analysis artifacts in an automated fashion for rapid iterative prototyping. We evaluate ABLATOR with a Transformer model for tabular datasets, ‘Tablator’, where we study the effect of several architectural components and hyperparameters in the largest ablation study for tabular datasets to date.
ABLATOR is an effective tool to conduct large-scale ablation studies with ease and leads to actionable insights that are particular to the experimental setting.

References

[1] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34:29304–29320, 2021.

[2] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5(1):4308, 2014.

[3] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in Neural Information Processing Systems, 24, 2011.

[4] Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. 2011.

[5] André Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett, and Holger Hoos. Efficient parameter importance analysis via ablation with surrogates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.

[6] André Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. CAVE: Configuration assessment, visualization and evaluation. In Roberto Battiti, Mauro Brunato, Ilias Kotsireas, and Panos M. Pardalos, editors, Learning and Intelligent Optimization, pages 115–130, Cham, 2019. Springer International Publishing.

[7] Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 1999.

[8] Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti, et al. Accounting for variance in machine learning benchmarks.
Proceedings of Machine Learning and Systems, 3:747–769, 2021.

[9] Xavier Bouthillier, César Laurent, and Pascal Vincent. Unreproducible research is reproducible. In International Conference on Machine Learning, pages 725–734. PMLR, 2019.

[10] Xavier Bouthillier and Gaël Varoquaux. Survey of machine-learning experimental methods at NeurIPS 2019 and ICLR 2020. PhD thesis, Inria Saclay Île-de-France, 2020.

[11] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the Learning to Rank Challenge, pages 1–24. PMLR, 2011.

[12] Katharina Eggensperger, Marius Lindauer, and Frank Hutter. Pitfalls and best practices in algorithm configuration. Journal of Artificial Intelligence Research, 64:861–893, 2019.

[13] William Falcon et al. PyTorch Lightning. GitHub repository, 3, 2019.

[14] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-sklearn 2.0: The next generation. CoRR, abs/2007.04074, 2020.

[15] V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Tejani. High-level library to help with training neural networks in PyTorch. https://github.com/pytorch/ignite, 2020.

[16] Jan-Mark Geusebroek, Gertjan J Burghouts, and Arnold WM Smeulders. The Amsterdam library of object images. International Journal of Computer Vision, 61:103–112, 2005.

[17] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. CoRR, abs/2106.11959, 2021.

[18] Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, et al. Analysis of the AutoML challenge series. Automated Machine Learning, 177, 2019.

[19] Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang, Brian Barr, and C Bayan Bruss. BASED-XAI: Breaking ablation studies down for explainable artificial intelligence.
arXiv preprint arXiv:2207.05566 , 2022.[20] Eduardo Hariton and Joseph J Locascio. Randomised controlled trials—the gold standard for ef-fectiveness research. BJOG: an international journal of obstetrics and gynaecology , 125(13):1716,2018.[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for imagerecognition. CoRR , abs/1512.03385, 2015.[22] Jeremy Howard and Sylvain Gugger. fastai: A layered API for deep learning. CoRR ,abs/2002.04688, 2020.[23] Kosuke Imai, Dustin Tingley, and Teppei Yamamoto. Experimental Designs for IdentifyingCausal Mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society ,176(1):5–51, 11 2012.[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.[25] Ron Kohavi et al. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid.InKdd, volume 96, pages 202–207, 1996.[26] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and IonStoica. Tune: A research platform for distributed model selection and training. arXiv preprintarXiv:1807.05118 , 2018.[27] Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. On the repro-ducibility and replicability of deep learning in software engineering. ACM Transactions onSoftware Engineering and Methodology (TOSEM) , 31(1):1–46, 2021.[28] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, andJiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprintarXiv:1908.03265 , 2019.[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprintarXiv:1711.05101 , 2017.[30] Moritz Meister, Sina Sheikholeslami, Amir H Payberah, Vladimir Vlassov, and Jim Dowling.Maggy: Scalable asynchronous parallel hyperparameter search. 
In Proceedings of the 1stWorkshop on Distributed Machine Learning , pages 28–33, 2020.[31] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang,William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emergingAI applications. CoRR , abs/1712.05889, 2017.11[32] R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters ,33(3):291–297, 1997.[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, AndreasKöpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy,Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style,High-Performance Deep Learning Library . Curran Associates Inc., Red Hook, NY, USA, 2019.[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pret-tenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot,and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine LearningResearch , 12:2825–2830, 2011.[35] David Picard. Torch.manual_seed(3407) is all you need: On the influence of random seeds indeep learning architectures for computer vision, 2021.[36] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer,Florence d’Alché Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machinelearning research (a report from the neurips 2019 reproducibility program). The Journal ofMachine Learning Research , 22(1):7459–7478, 2021.[37] Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hy-perparameters of machine learning algorithms. The Journal of Machine Learning Research ,20(1):1934–1965, 2019.[38] Tao Qin and Tie-Yan Liu. Introducing letor 4.0 datasets. 
arXiv preprint arXiv:1306.2597 , 2013.[39] Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals ofmathematical statistics , pages 400–407, 1951.[40] Sina Sheikholeslami, Moritz Meister, Tianze Wang, Amir H Payberah, Vladimir Vlassov,and Jim Dowling. Autoablation: Automated parallel ablation studies for deep learning. InProceedings of the 1st Workshop on Machine Learning and Systems , pages 55–61, 2021.[41] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, andIsabelle Guyon. Bayesian optimization is superior to random search for machine learninghyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Hugo JairEscalante and Katja Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demon-stration Track , volume 133 of Proceedings of Machine Learning Research , pages 3–26. PMLR,06–12 Dec 2021.[42] Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceedingsof the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining ,pages 2367–2376, 2018.[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, AnthonyMoi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer,Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, SylvainGugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methodsin Natural Language Processing: System Demonstrations , pages 38–45, Online, October 2020.Association for Computational Linguistics.12[44] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEEtransactions on evolutionary computation , 1(1):67–82, 1997.[45] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santi-ago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 
Big bird: Transformersfor longer sequences. Advances in neural information processing systems , 33:17283–17297,2020.[46] Donglin Zhuang, Xingyao Zhang, Shuaiwen Song, and Sara Hooker. Randomness in neuralnetwork training: Characterizing the impact of tooling. Proceedings of Machine Learning andSystems , 4:316–336, 2022.[47] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar C Tatikonda, Nicha Dvornek, XenophonPapademetris, and James Duncan. Adabelief optimizer: Adapting stepsizes by the belief inobserved gradients. Advances in neural information processing systems , 33:18795–18806, 2020.[48] Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-pytorch tabular: Multi-fidelitymetalearning for efficient and robust autodl. arXiv preprint arXiv:2006.13799 , 2020.137 Submission Checklist1. For all authors. . .(a)Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope? [Yes] Our results can be found in sections 3.1 to 3.3.(b) Did you describe the limitations of your work? [Yes] See section 4.(c)Did you discuss any potential negative societal impacts of your work? [Yes] See sectionsec-tion 4.(d)Have you read the ethics author’s and review guidelines and ensured that your paperconforms to them? https://automl.cc/ethics-accessibility/ [Yes] They are appliedthroughout the paper.2. If you are including theoretical results. . .(a)Did you state the full set of assumptions of all theoretical results? [N/A] There are notheoretical results in our work(b)Did you include complete proofs of all theoretical results? [N/A] There are no theoreticalresults in our work3. If you ran experiments. . .(a)Did you include the code, data, and instructions needed to reproduce the main experimentalresults, including all requirements (e.g., requirements.txt with explicit version), an instruc-tiveREADME with installation, and execution commands (either in the supplemental materialor as a url)? 
[Yes] We have included the code that was used to run all the experiments and produce the tables and figures as a zip file.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] We include the raw results that were used to obtain our analysis.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] We have included them in the supplementary.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] We have followed standard development practices.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyper-parameter settings, and how they were chosen)? [Yes] We have included them in the supplementary.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We have included them in the supplementary.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See section 3.3.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We use an identical evaluation protocol when comparing between methods for all our experiments in sections 3.1 to 3.3.
(i) Did you compare performance over time? [N/A] Performance over time is not applicable for our work.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] The random seeds used are in the code in our supplementary.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Results are in sections 3.2 and 3.3.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We use the same benchmark as [17].
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] We have included it in the supplementary.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [Yes] They are described in section 3.1 and the supplementary.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .
(a) If your work uses existing assets, did you cite the creators? [Yes] table 1 and supplementary.
(b) Did you mention the license of the assets? [Yes] We provide details of all assets in the supplementary.
(c) Did you include any new assets either in the supplemental material or as a url? [N/A] We do not use any new assets.
(d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
_PIqMqM9xW
eBLV3i7PG1c
automl.cc/AutoML/2023/ABCD_Track
2023
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments
["Iordanis Fostiropoulos", "Laurent Itti"]
Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as ‘data-parallelism’ or ‘model-parallelism’. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, ‘Tablator’, where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR; https://github.com/fostiropoulos/ablator
["Machine Learning Systems", "Ablation Experiments", "Experiment Design"]
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments
Iordanis Fostiropoulos (1), Laurent Itti (1)
(1) University of Southern California, Los Angeles, California

Abstract
Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as ‘data-parallelism’ or ‘model-parallelism’. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, ‘Tablator’, where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR; https://github.com/fostiropoulos/ablator

1 Introduction
Machine Learning (ML) research has been criticized for an inability to explain the reasons a method provides an improvement on a specific benchmark.
It can be unclear whether a novel component is responsible for the improvement or the result of a statistical outlier [35].
Ablation is used to understand how the hyperparameters and architectural components contribute to the performance of a method. This is in contrast to Hyper-Parameter Optimization (HPO) or Neural Architecture Search (NAS), where the objective is to search for the single best performing configuration. As the complexity of ML models increases, so does the number of components and parameters that need to be ablated, which increases the search space of possible configurations. Therefore, efficient horizontal scaling of multiple parallel experimental trials is necessary.
There is a lack of available frameworks for horizontal scaling of ablation experiments. Currently, ML practitioners manually perform horizontal scaling for experiments, such as for hyperparameter selection, distributed execution, and the consolidation and analysis of artifacts [10]. Additionally, current frameworks [31] for distributed execution do not provide native support for maintaining the state of an experiment and resuming the execution of multiple trials, referred to as experiment persistence. We find that errors in the early stages of experiments can propagate to the analysis and lead to misleading conclusions. Possible errors may be introduced by sampling bias in the hyperparameter selection strategy or by the fault-intolerance of distributed execution, survival bias.
The execution of randomized control trials is necessary to determine causal effects [23, 20]. We identify several sources of errors that can influence the results. We categorize them as Analysis, Execution, and Implementation errors.
Analysis errors can result from the hyperparameter selection sampling bias. Nonrandom effects during experiment execution can introduce analysis errors. For example, inconclusive trials due to out-of-memory errors caused by a larger model footprint would introduce survival bias to the analysis that will favor smaller models. Implementation errors are mistakes made by users caused by the increased code complexity of ablating multiple method components while maintaining different code bases. We discuss the details of our analysis in Section 3.2.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Figure 1: Left is the rapid prototyping process when using ABLATOR, where only the method implementation and the configuration are required to RUN() the study and provide ANALYSIS(). ABLATOR handles the horizontal scaling of experimental trials on a cluster of nodes and is fault tolerant, where trials can be continued on the same or a different node due to the Persistence provided by ABLATOR. Right is the process without ABLATOR, where the user must use different Libraries or manually perform ‘HPO Selection’, ‘Resource Allocation’, and ‘Analysis’. Additional Manual Effort will be required to integrate between the libraries, where errors between different steps propagate to the analysis, which will be erroneous. ABLATOR provides automation by removing boiler-plate code and managing errors internally.

To aid in error-free horizontal scaling of multiple experiments in the ML community, we propose a stateful experiment paradigm where we unify all experiment stages under a single framework. A stateful experiment is initialized by the configuration and code implementation of a method. Our framework maintains the state of each experimental trial and provides experiment persistence, where the experiment can continue the execution agnostic to the execution environment.
The analysis artifacts are produced automatically by the experiment state for faster prototyping. Our paradigm is implemented in our tool ABLATOR with support for PyTorch [33] model development. We present an analysis of the sources of errors and provide recommendations that can be useful beyond our framework. We use our framework to study the effect of multiple training and model components on the performance of a Transformer model for tabular datasets, ‘Tablator’, where we perform a large-scale ablation study of 2,337 trials. Our contributions can be summarized as follows. First, we provide a formalization of a stateful experiment design paradigm that we use to address common errors in the execution of ML experiments. Second, we provide ABLATOR, a framework that implements our paradigm and facilitates the automated execution and analysis of a model implementation given a configuration. Third, we identify sources of error in ML ablation studies and provide recommendations for mitigating them. Fourth, we perform the largest-to-date ablation study of a Deep Learning model on tabular datasets and provide analysis that can be useful to the research community.
We first introduce the features of ABLATOR relevant to horizontal scaling of experiments. Next, we evaluate the main features of our tool in a case study demonstrating the horizontal scaling capabilities of ABLATOR. We present our results using three research questions, Sections 3.1 to 3.3.

2 Methods
To implement ABLATOR and address common issues in horizontal scaling of experiments, it is necessary to introduce the formalism of a ‘stateful experiment design’ paradigm. In this section, we introduce our paradigm and, in Section 2.4, the implementation of ABLATOR.
We identify three stages of an experiment: the design, execution, and analysis (Sections 2.1 to 2.3).

2.1 Experiment Design
During the design phase of an ML ablation study, a hypothesis is defined as an experiment on the improvement that an architectural component, such as Residual Connections, provides to the performance of the model. The search-space of our hypothesis can be defined as Residual = [True, False]. The methodology of our experiment is defined by the implementation of the model.
Multiple experimental trials are required to improve the statistical power of a test [20], which requires randomly sampling from the search-space. An experimental trial can be described as a stochastic process that produces a performance metric. The stochasticity can be observed when performance differs significantly with identical initial conditions, such as re-running the same experiment but obtaining different results.
Thus, to define a trial, we maintain two states to describe the system at any given point: the initial conditions (Sections 2.1.1 and 2.1.2) and the current state (Section 2.2). The initial conditions of a trial are defined by the sampled hyper-parameters and the implementation.

distributed.yaml:

    total_trials: 2000
    optim_metrics: [[val_loss, min]]
    tune:
      train_config.optimizer_config.name: ["adam", ...
      train_config.dataset: ["year", "yahoo", "helena", ...
      model_config.mask_type: ["mix", "global", "full", "random"]
      model_config.residual: [True, False]
      model_config.random_mask_alpha: [0.5, 1]

prototyping.yaml:

    train_config:
      dataset: adult
      optimizer_config:
        name: adam
    model_config:
      mask_type: random

    @configclass
    class TablatorConfig(ModelConfig):
        residual: bool = True
        d_out: Derived[ty.Optional[int]] = None
        mask_type: MaskType = MaskType("random")

    @configclass
    class RunConfig(ParallelConfig):
        experiment_dir: Stateless[Optional[str]] = None
        model_config: ModelConfig
        train_config: TrainConfig

Figure 2: ABLATOR provides a configuration system specific to ML experiments, where it has to encompass multiple trials in a compact definition and be unambiguous. On the left is an illustration of the configuration for distributed execution (distributed.yaml) and method prototyping (prototyping.yaml). On the right, the configuration is type checked by the ABLATOR library. The library provides flexible type definitions (red) that are resolved during run-time. The configuration is compact and unambiguous at initialization, supporting our stateful experiment design paradigm in Section 2.1.

2.1.1 Configuration. The configuration describes the hyperparameter search-space from which the hyperparameters are sampled. Two custom Python annotations are introduced, Stateless and Derived, to define attributes to which the experiment state is agnostic, while unannotated attributes are assumed to be stateful control variables. Stateful attributes require an assignment during the initialization stage unless they are annotated as Optional.
Stateless configuration attributes can be used as a proxy for variables that can take different value assignments between trials or experiments. For example, the learning rate can be set as an independent variable and must be annotated as stateless. Additionally, there are variables that take different values between experiments and trials to which the state is agnostic; for example, a random seed or a directory path between execution environments can be annotated as stateless.
Derived attributes are un-decided at the start of the experiment and do not require a value assignment. Instead, the value is determined by internal experiment processes that can depend on other experimental attributes, such as the dataset.
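The stateful/stateless/derived semantics can be sketched in a few lines of plain Python. The marker values and the experiment_id helper below are hypothetical illustrations of the idea of a state-aware configuration, not ABLATOR's actual configuration implementation.

```python
from dataclasses import dataclass

# Hypothetical markers mimicking the annotation semantics described above;
# this is an illustrative sketch, not ABLATOR's configuration system.
STATEFUL, STATELESS, DERIVED = "stateful", "stateless", "derived"

@dataclass
class Attr:
    kind: str            # how the attribute participates in the experiment state
    value: object = None # Derived attributes are resolved later, at runtime

def experiment_id(config: dict) -> str:
    # Only stateful (control) attributes define the experiment signature, so
    # the same experiment can resume even if paths or seeds differ per machine.
    stateful = sorted((k, a.value) for k, a in config.items() if a.kind == STATEFUL)
    return "|".join(f"{k}={v}" for k, v in stateful)

config = {
    "residual":       Attr(STATEFUL, True),      # part of the hypothesis
    "experiment_dir": Attr(STATELESS, "/tmp/a"), # may change between environments
    "d_out":          Attr(DERIVED),             # resolved from the dataset at runtime
}
```

Because only stateful attributes contribute to the signature, changing a stateless attribute such as the experiment directory leaves the experiment identity, and therefore its persistence, intact.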
However, given the same initial state, the attribute is expected to result in the same value and is therefore deterministic. For example, the input size used in a model’s architecture that depends on the dataset will be annotated as Derived during the experiment design phase.
The annotations address common requirements of ML experiments, where a configuration may have to describe a search-space that encompasses multiple trials, as opposed to taking on a specific value assignment at initialization. Additionally, an ML experiment can have attributes that are difficult to model at initialization but can be inferred during execution. For a stateful design paradigm, the configuration should be unambiguous at the initialization state, i.e. Figure 2.

2.1.2 Implementation. The implementation describes the methodology of the hypothesis. Invariance of the implementation w.r.t. the method evaluated produces a single code artifact that encapsulates all methods, i.e. a single code base for using and not using residual connections. The implementation computes one or more evaluation metrics. Lastly, the implementation should have a deterministic value assignment to the variables we defined as Derived.
Implementation invariance provides a compact representation and is robust to errors. A compact representation provides ease of use as a consequence of a shared implementation among the ablating components, where the differences are specified through the configuration and applied by conditional if statements. The advantage of this approach is that the performance variance caused by implementation differences is minimized, where even the order of matrix multiplication can have significant effects on the method performance [46].

2.2 Experiment Execution
Experiment state can be Running or Complete as the aggregate of the state of all experimental trials. Each trial can be in three additional states: Pending, Failed, or Pruned. Pending trials are defined by their initial conditions alone, i.e.
the sampled hyperparameters. A Running trial extends the definition to include a checkpoint. Complete trials extend the definition to include one or more metrics, such as the validation loss. Pruned and Failed trials are the result of irrecoverable errors during initialization or execution. A fault-tolerant strategy reschedules trials with recoverable errors as Pending and attempts to resume from the checkpoint. A long-running experiment can be interrupted (i.e. server maintenance) while errored trials do not interfere with the results (i.e. failed trials due to recoverable errors).
Checkpoint describes the optimization state of a trial and contains sufficient information to resume execution. ABLATOR stores the model weights, optimizer, scheduler, and training meta-data, such as the current training iteration, using a compact representation. The checkpoint mechanism in ABLATOR can be extended to support custom use cases, i.e. RL. Lastly, maintaining the state of the experiment requires keeping track of the checkpoints and results. Multiple checkpoints are stored locally on each node and can be synchronized with cloud storage. The experiment is agnostic to the execution environment; experiment persistence.

2.3 Actionable Analysis
Analysis that is actionable is a result of the automation to provide sufficient artifacts to support decision making. The artifacts should help facilitate a quick and informed decision on the likelihood of the hypothesis. The experiment state is used to infer the hypothesis, i.e. ‘what are we ablating?’, and the conclusiveness of the analysis, i.e. ‘is the trial failed?’. The analyses ABLATOR provides infer the search-space, such as control and independent variables, from the configuration and the variable type to produce the corresponding artifacts. The artifacts produced address common problems in evaluating ML methods (Section 3.2).
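The trial lifecycle and fault-tolerance policy of Section 2.2 can be summarized with a small state machine. The enum and helper functions below are a hypothetical sketch of the described behavior, not ABLATOR's internal code.

```python
from enum import Enum, auto

class TrialState(Enum):
    PENDING = auto()   # defined by its sampled hyperparameters alone
    RUNNING = auto()   # additionally has a checkpoint
    COMPLETE = auto()  # additionally has one or more metrics
    PRUNED = auto()    # irrecoverable error at initialization or execution
    FAILED = auto()

def on_error(recoverable: bool) -> TrialState:
    # A fault-tolerant scheduler reschedules recoverable errors (e.g. node
    # pre-emption) as Pending, to be resumed from the last checkpoint;
    # irrecoverable errors mark the trial as Failed.
    return TrialState.PENDING if recoverable else TrialState.FAILED

def experiment_complete(trials: list) -> bool:
    # The experiment state is the aggregate over all trials: it is Complete
    # only once no trial is still Pending or Running.
    return all(t not in (TrialState.PENDING, TrialState.RUNNING) for t in trials)
```

Under this policy an interrupted experiment simply leaves some trials Pending, which is what allows execution to resume on the same or a different node.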
For each attribute, the goal is to encapsulate the best, average, variance, and distribution of the performance metric under a single figure; i.e. Figures 4 and 5.

2.4 ABLATOR
ABLATOR is designed in Python with support for PyTorch models, while the distributed execution system uses Ray Core [31]; Figure 1. We describe the features of ABLATOR important in addressing a stateful experiment paradigm. ABLATOR can be extended or customized specific to the use-case without loss of automation, where an object-oriented design provides access to function overwriting. The features of ABLATOR provide ease of use, where it requires defining an experiment through implementation and configuration. Automation is supported by providing an abstraction layer on distributed execution with fault tolerance, artifact consolidation, and analysis. Our framework is agnostic to the execution environment and can run on a laptop and a cluster of nodes.
Configuration uses a hierarchical dictionary-like format that is easy to understand and can be converted to and from yaml files. ABLATOR uses a strict type-checking system with custom annotations (Section 2.1.1). A unique signature identifier ("ID") is generated for each experiment that corresponds to the values of the stateful configuration attributes, while for a trial, the identifier is based on the unique value assignment of all configurable properties. Thus, the configuration system allows for a hierarchical representation of trials under a single experiment and facilitates experiment persistence, where multiple experiments are stored in the same directory.
Implementation A Trainer class will manage the physical resources of the experiment. There are two options according to the use case: ProtoTrainer for prototyping in a local environment, and ParallelTrainer for horizontal scaling of a single experiment. ParallelTrainer is unique to ABLATOR, where multiple trials are managed and executed in parallel.
Prototyping to experiment deployment requires a single change: ProtoTrainer ⇒ ParallelTrainer.
Artifact Persistence For every resource node, the trials are executed in parallel, and a failure in a single trial does not result in interruption of the experiment. We use the master node to maintain the experiment state (Section 2.2) and synchronize the artifacts of all nodes with a central database. Cloud compute nodes are often ephemeral, and restarting the experiment requires only for the files to be synchronized among the centralized storage and all nodes. Furthermore, the files stored in the central storage are sufficient to perform an analysis or recover from errors.
Analysis Artifacts are specific to numerical attributes and categorical attributes. The attribute type is informed by the configuration. Figures are artifacts that summarize the mean, best, and distribution of a performance metric. For numerical attributes, we use scatter-plots with optional interpolation curves, while for categorical attributes we use violin-plots. The analysis can be extended to support custom use cases, such as additional figures or tables, while still being automatically generated from the experiment state; examples are in Section 3.3 and our supplementary.

3 Experiments and Results
We first present how ABLATOR can be used for horizontal scaling with an ablation study on ‘Tablator’, a Transformer model we designed for this study; Section 3.1. In Section 3.2 we categorize common errors during horizontal scaling of ablation experiments and provide our recommendations. In Section 3.3 we provide the results of an ablation experiment on a tabular dataset benchmark. For reasons of brevity, we discuss only the results most relevant to ABLATOR. We attach the code that was used for our experiments and analysis, and additional experiments, in the supplementary.

3.1 RQ-1: How can ABLATOR improve the horizontal scaling of thousands of experimental trials?
ABLATOR requires the configuration and implementation.
We extend the implementation of FT-Transformers (FT-T) [17] with minimal changes to the original code (https://github.com/Yura52/tabular-dl-revisiting-models). We implement a model we call ‘Tablator’ and evaluate all the design components of FT-T as well as the effect of Residual Connections [21] and Attention Masks inspired by BigBird [45]. We evaluate ‘Full’, ‘Mixed’, ‘Global’, and ‘Random’ attention mechanisms and explain their implementation in the supplementary.
We perform an ablation on 14 model hyperparameters and components in total, and evaluate the effect that model capacity, dropout hyper-parameters, prenormalization, weight initialization, and the activation function have on the model performance. Additionally, we evaluate 7 dataset preprocessing techniques and training configurations, such as feature encoding methods, missing value imputation, feature normalization, training time, and optimization.
The differences between ‘Tablator’ and FT-T consist of an additional module for Attention masks that requires 9 additional lines of code, as well as 2 lines of code insertions for residual connections. The majority of the development effort was directed towards making the original dataset performant and converting it to a PyTorch Dataset as opposed to a Python dataclass. We define the tunable configurable hyperparameters as shown in Figure 2.
We first verified our implementation with a ProtoTrainer in this section, and then we scale our experiment with a single code change using a ParallelTrainer to thousands of trials for our results in Section 3.3.
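The ProtoTrainer-to-ParallelTrainer promotion described above might look roughly as follows. The class and method bodies here are simplified, hypothetical stand-ins for illustration, not ABLATOR's public API (which dispatches ParallelTrainer trials over a Ray cluster).

```python
# Simplified stand-ins illustrating the one-line promotion from local
# prototyping to horizontal scaling; not ABLATOR's actual API.
class ProtoTrainer:
    def __init__(self, run_trial, config):
        self.run_trial, self.config = run_trial, config

    def launch(self):
        # Prototyping: execute a single trial locally to verify the implementation.
        return [self.run_trial(self.config)]

class ParallelTrainer(ProtoTrainer):
    def __init__(self, run_trial, config, total_trials=4):
        super().__init__(run_trial, config)
        self.total_trials = total_trials

    def launch(self):
        # Deployment: the same implementation and configuration fan out over
        # many sampled trials (in ABLATOR, across a cluster of nodes).
        return [self.run_trial(self.config) for _ in range(self.total_trials)]

def run_trial(config):
    # A trial is a process that returns a performance metric.
    return {"val_loss": 0.1}

trainer = ProtoTrainer(run_trial, config={})       # verify locally ...
# trainer = ParallelTrainer(run_trial, config={})  # ... then switch one line to scale
results = trainer.launch()
```

Because the configuration and implementation are shared, verifying the single-trial path also verifies the code that every distributed trial will run.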
For this experiment, it took significantly more time to write the current section of this paper than it took to write the code and start the execution of the experiments.

3.2 RQ-2: What are common sources of errors during horizontal scaling of experiments?

We identify 3 categories of errors, Analysis (†), Execution (‡), and Implementation (∗) errors, based on empirical observations, and use previous analyses [10, 8, 9, 27, 36, 1, 46, 12] to support our conclusions. In this section, we provide examples of each and attach additional analysis in our supplementary.

Figure 3: We evaluate how Budget Allocation (‡) can influence the analysis of an ablation study. We vary the number of trials we use for analysis ('N trials'). We compare estimating the performance of a method on a dataset using the mean (left) (i.e. ANOVA) or the best (right) trial (i.e. proof-by-existence). Evaluating the performance of a component by its mean performance would require fewer trials for an easier dataset ('Covtype') when compared to using the best trial, while for a more challenging dataset ('Aloi') evaluating by the best trial would be more efficient, as the performance converges at around 20 trials (right figure) compared to >50 for the mean (left figure). We conclude that the ablation budget should be taken into account and chosen relative to the type of analysis.

Sampling Strategy (†) can be incompatible with the method used to evaluate the performance of a component and lead to misleading analysis [41]. For example, performing HPO and comparing the mean performance of the sampled trials can bias the result towards a single component variant. We perform two identical experiments using Tablator with an identical budget for the CovType ('CO') dataset [7]. When randomly sampling between 5 optimizers, AdaB [47], Adam [24], AdamW [29], RAdam [28], and SGD [39], every optimization algorithm was sampled with an even probability P(O) ≈ 0.2.
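The even sampling probability above, and the bias that an exploitative HPO sampler introduces, can be reproduced with a toy bandit simulation (illustrative; the greedy sampler below is a simple stand-in, not TPE):

```python
# Toy simulation of sampling bias: a uniform sampler visits each optimizer
# evenly, while a greedy, exploit-heavy sampler concentrates its budget on
# the optimizer with the best observed mean, under-exploring alternatives.
import random
from collections import Counter

OPTIMIZERS = ["adab", "adam", "adamw", "radam", "sgd"]
TRUE_SCORE = {"adab": 0.70, "adam": 0.72, "adamw": 0.72, "radam": 0.71, "sgd": 0.75}

def run(sampler, budget, rng):
    counts, observed = Counter(), {o: [] for o in OPTIMIZERS}
    for _ in range(budget):
        opt = sampler(observed, rng)
        counts[opt] += 1
        observed[opt].append(TRUE_SCORE[opt] + rng.gauss(0, 0.02))  # noisy trial
    return counts

def uniform(observed, rng):
    return rng.choice(OPTIMIZERS)

def greedy(observed, rng):
    under = [o for o, xs in observed.items() if len(xs) < 3]
    if under:                                   # brief exploration phase
        return rng.choice(under)
    return max(observed, key=lambda o: sum(observed[o]) / len(observed[o]))

rng = random.Random(0)
even_counts = run(uniform, 500, rng)
biased_counts = run(greedy, 500, rng)
```

Averaging observed performance per optimizer under the greedy sampler would be based on very few samples for the under-explored arms, which is the kind of misleading estimate the paragraph describes.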
In contrast, when performing HPO with a Tree-structured Parzen Estimator (TPE) [3], SGD was oversampled with P(SGD) = 0.76, as it was found to perform relatively better compared to other methods. Other optimization methods were undersampled by TPE, and their estimated performance is lower when compared to the empirical mean performance of the same method calculated via random sampling. When TPE was used, all optimizers appeared to underperform on average by 4.6% and 3.8% when evaluating the best and mean trial performance. We conclude that statistical tests can be influenced by the bias of the HPO method used to sample configurations, and their performance might not be fully explored.

Survival Bias (†) can be caused by nonrandom execution errors. We identify the trials for which there were memory errors. We perform feature importance analysis and use a surrogate random forest model [34] to predict whether a trial will result in a memory error. We find that the configuration attributes related to the dataset and the hidden dimension were the most important.

Dataset    CA↓     AD↑    HE↑    JA↑    HI↑    AL↑    EP↑    YE↓    CO↑    YA↓     MI↓
FT-T       0.459   0.859  0.391  0.732  0.729  0.960  0.898  8.855  0.970  0.756   0.746
Tablator   0.535   0.856  0.368  0.718  0.723  0.921  0.896  8.778  0.930  0.780   0.749
ΔImp.∗    -0.076   0.003  0.023  0.014  0.006  0.039  0.002  0.077  0.040  -0.024  -0.003

Table 1: We evaluate the difference between the best performing trials as reported by FT-Transformer ('FT-T') [17] and as found by our ablation experiments in Section 2.1. FT-T is in the subspace of configurations of Tablator, where a greedy HPO strategy is used as opposed to random sampling for Tablator. As such, we expect Tablator to perform similarly but not better. We use the benchmark as a way to evaluate Implementation Errors (∗) from Section 3.2. We conclude that our implementation contains no errors, as the relative difference (ΔImp.∗) is within the expected margin of error between HPO and random sampling.
A larger dataset has more features, which leads to a model with a larger hidden dimension. The attributes related to the hidden dimension scored 23% higher than the average feature importance. We conclude that smaller models and datasets will have a survival bias from the fewer out-of-memory execution errors, and that such bias could be mitigated by better resource allocation. For example, one can group experiments by their memory utilization so as to avoid out-of-memory errors from the largest trial.

Figure 4: Evaluation of the effect of a larger model for a regression dataset, where (RMSE)↓ is normalized for the relative difficulty of each dataset. A larger model performs better but with higher variance, where the uncertainty on the estimated performance increases. A larger model might be a riskier choice when deploying a model that requires being iteratively trained.

Resource Utilization Statistics (‡) We observe the resource utilization statistics: the mean usage of a trial is 3,075 ± 3,578 (MiB), while the maximum is 32,303 (MiB). The high variance in memory utilization is a consequence of a search space that correlates with memory utilization. Allocating resources based on the largest trial might be infeasible; using a heuristic for resource utilization might be necessary.

Budget Allocation (‡) We vary the number of experimental trials for 10 repeated observations and report the best and mean performance in Figure 3. An increased budget reduces the variance of the mean performance. We report less variance in the performance of the best trial for repeated observations. We conclude that, for 'Tablator', fewer trials are required to obtain an estimate of the top performance, while the mean performance would require more trials.

Implementation Errors (∗) Our observations on implementation errors extend previous analyses [46, 27, 36, 12] on the impact of ML tooling, where the sources of errors are poor development practices and variance introduced by tooling.
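The resource-utilization heuristic suggested above could be sketched as follows; the footprint model and memory tiers are hypothetical:

```python
# Sketch of memory-aware trial grouping: estimate each trial's footprint
# from its configuration and bucket trials into node-memory tiers, instead
# of sizing every node for the single largest trial.

def estimate_mib(config):
    """Crude heuristic (not a real profiler): footprint grows with
    hidden_dim * batch_size."""
    return 0.004 * config["hidden_dim"] * config["batch_size"]

def group_by_memory(configs, tiers_mib=(4_096, 16_384, 32_768)):
    """Assign each trial config to the smallest memory tier that fits it."""
    groups = {t: [] for t in tiers_mib}
    for cfg in configs:
        need = estimate_mib(cfg)
        tier = next((t for t in tiers_mib if need <= t), None)
        if tier is None:
            raise MemoryError(f"no tier fits an estimated {need:.0f} MiB")
        groups[tier].append(cfg)
    return groups

groups = group_by_memory([
    {"hidden_dim": 64, "batch_size": 128},     # small trial
    {"hidden_dim": 512, "batch_size": 4096},   # large trial
])
```

Scheduling each tier on nodes with matching memory keeps large trials from failing nonrandomly, which is the source of the survival bias discussed above.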
Packaging has the benefit of incremental development and modular design, where, in the example of 'Tablator', two methods ([45] and [17]) can be combined. Additionally, as the method complexity increases, version control that includes the configuration, and analysis that corresponds to the implementation, can prevent misinterpretation of the results.

3.3 RQ-3: Can ABLATOR be used to perform a large-scale ablation study on tabular datasets?

We use 'Tablator', presented in Section 3.1, to evaluate possible improvements in data processing, the Transformer model architecture, and the effect of training hyperparameters on 2,337 trials, where the current largest ablation on tabular datasets is 2,000 trials [48].

Figure 5: Example of automatically generated analysis artifacts from ABLATOR. On the left are the artifacts for 'CO' [7] and on the right for 'AL' [16]. We compare the effect of an optimizer on the performance on a dataset. In agreement with [44], there is no single model that generalizes across all datasets; for example, Adam [24] under-performs for 'AL' but not for 'CO'. We conclude that separate ablation studies will be required for different datasets.

Our results are summarized in Figures 4 and 5. In Table 1 we report the accuracy, where higher is better (↑), and the root-mean-square error ('RMSE'), where lower is better (↓), on 11 datasets [32, 25, 18, 18, 2, 16, 17, 4, 7, 11, 38], identical to the benchmark of FT-T [17]. We find Tablator performs similarly on all datasets. The goal of the benchmark comparison is to verify our implementation, while the goal of our study is to evaluate general methods that work best among datasets, not a benchmark improvement. Similarly to FT-T [17], we conclude that the simplest methods work best in most general cases, e.g. SGD [39] with momentum has the best mean performance on 9 of 11 datasets.
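Rank aggregations like the ones reported here, by mean or by best trial, can be computed as in this sketch (hypothetical scores):

```python
# Sketch of the rank comparison: rank each method per dataset by the mean or
# by the best trial metric, then average ranks across datasets. A method can
# rank well by its best trial yet poorly by its mean.
from statistics import mean

def average_rank(results, by="mean"):
    """results[dataset][method] is a list of trial scores (higher is better)."""
    agg = mean if by == "mean" else max
    ranks = {m: [] for m in next(iter(results.values()))}
    for per_method in results.values():
        ordered = sorted(per_method, key=lambda m: agg(per_method[m]), reverse=True)
        for rank, method in enumerate(ordered, start=1):
            ranks[method].append(rank)
    return {m: mean(rs) for m, rs in ranks.items()}

results = {  # hypothetical trial scores per dataset and optimizer
    "d1": {"sgd": [0.70, 0.90], "radam": [0.85, 0.86]},
    "d2": {"sgd": [0.80, 0.80], "radam": [0.70, 0.95]},
}
by_mean = average_rank(results, by="mean")
by_best = average_rank(results, by="best")
```

In this toy data, radam dominates by mean performance while sgd ties it by best trial, mirroring how the choice of aggregation changes the conclusion.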
For more complex methods, there is a large variance in the performance of the method between datasets. For example, we find that RAdam [28] ranks on average 2.71 for classification datasets but 3.75 for regression datasets when evaluated by the mean performance. Additionally, more complex methods may result in the best performing trial but perform worse on average, where RAdam ranks on average 2.25 when evaluated on the best-performing trial for regression datasets (compared to 3.75). Our results indicate that using a complex method may require a large tuning budget to return good results. Additionally, we conclude that larger models only perform moderately better (Figure 4).

The high performance variance between different components on different datasets leads us to conclude that evaluations should be done with multiple datasets. Additionally, we find that tuning specific to the dataset and the training configuration would be required. Simple design choices, such as SGD and moderate model capacity, can provide a good starting point, while more complex training configurations can provide trade-offs on performance and uncertainty that can be specific to the use case.

From the median and mean performance observed in our results, we did not find any of the preprocessing methods to have a consistent, significant effect on the model performance. ABLATOR can help provide actionable results specific to the dataset. We conclude that several ablation experiments are required to evaluate a method, and ABLATOR is the only tool currently available to facilitate rapid evaluation.

4 Discussion

In our work we present ABLATOR, an AutoML framework for ablation experiments. Beyond our framework, there are several issues w.r.t. automated decision making, as there is no universal statistical test or threshold to accept or reject a hypothesis. Analysis requires domain expertise relevant to the evaluation setting.
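When the metric is non-normally distributed, one distribution-free option is a percentile bootstrap of the difference in mean trial metrics; a minimal sketch with hypothetical samples:

```python
# Illustrative, distribution-free comparison of two component variants via a
# percentile bootstrap of the difference in means; no normality assumption.
import random
from statistics import mean

def bootstrap_diff(a, b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval (lo, hi) for mean(a) - mean(b)."""
    rng = random.Random(seed)
    diffs = sorted(
        mean(rng.choices(a, k=len(a))) - mean(rng.choices(b, k=len(b)))
        for _ in range(n_boot)
    )
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]

# hypothetical accuracy samples for two component variants
lo, hi = bootstrap_diff([0.80, 0.90, 1.00] * 10, [0.50, 0.60, 0.70] * 10)
```

An interval that excludes zero suggests a real difference, but as the discussion notes, the choice of test and threshold remains a domain-specific judgment.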
Specific to ML research is the lack of methods for evaluating a hypothesis where the metric can be both non-normally distributed and heteroskedastic, e.g. Figure 5.

Broader Impact Statement Performing large-scale ablation experiments may require a large amount of computational resources that can negatively impact the environment through CO2 emissions. However, the automation provided by ABLATOR can result in a more effective use of computational resources and reduce CO2 emissions. ABLATOR can help improve research practices without a negative impact on society when used in the context in which it is presented.

5 Related Works

We identify four categories of work that are most similar to ours: work that focuses on errors introduced by tools and incorrect analysis, work on horizontal scaling of experiments, works that aid in ablation studies, and tools for automated HPO.

Previous work [10, 8, 9, 27, 36, 1, 46, 12] identifies the sources of erroneous analysis as poor experiment design practices resulting from improper use of statistical evaluation methods, HPO budget, HPO strategies, and tooling, and provides recommendations. We extend their work and investigate errors during horizontal scaling of experiments that lead to erroneous analysis. We identify errors from the sampling strategy, non-random execution errors, and implementation errors. We provide general recommendations in Section 3.2 and address the errors with ABLATOR.

Several tools have been proposed [13, 15, 22, 43, 26] that support distributed experiment execution. However, they require manual effort in integrating with other libraries for resource allocation, scheduling of experiments, resuming faulty trials, result aggregation, configuration sampling, and analysis. In contrast, ABLATOR combines all of the above in an automated fashion, where only the implementation and configuration of the method are used to produce the analysis artifacts.

Ablation frameworks introduce methods and tools specific to constructing ablation analysis artifacts.
Such methods can have limited use cases [19, 5, 37] or lack automation [42]. In contrast, ABLATOR provides analysis artifacts that give a holistic view of a method's performance and can be extended to support automation and the specific use cases addressed by the works above.

AutoML methods [14, 48, 6] are designed for HPO and can be extended to ablation experiments that provide support for automated analysis. Unlike ABLATOR, such tools are designed for simple use cases, such as statistical models, and require additional effort to scale the experiments horizontally. Such tools, and similar ones, can be used as the implementation provided to ABLATOR and as such are orthogonal to our work. AutoAblation [40] extends Maggy [30] to Deep Learning models. However, allocating and managing GPU resources for each trial requires manual effort, and AutoAblation does not provide experiment persistence and as such is not fault-tolerant. Additionally, the declarative design paradigm has limited use cases, as opposed to the object-oriented design of ABLATOR. As such, ABLATOR improves automation by managing GPU resources, storing experimental artifacts, restarting erroneous trials, and removing boiler-plate code, where only the method implementation with the configuration is required to provide automated analysis.

6 Conclusion

In this work, we identify several sources of error common in horizontal scaling of multiple experimental trials. We provide general recommendations and address errors with a stateful experiment design paradigm. ABLATOR implements the paradigm to automate the scaling of ablation experiments across multiple resources and produces analysis artifacts in an automated fashion for rapid iterative prototyping. We evaluate ABLATOR with a Transformer model for tabular datasets, 'Tablator', where we study the effect of several architectural components and hyperparameters in the largest ablation study for tabular datasets to date.
ABLATOR is an effective tool to conduct large-scale ablation studies with ease and leads to actionable insights that are particular to the experimental setting.

References

[1] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34:29304–29320, 2021.
[2] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5(1):4308, 2014.
[3] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in Neural Information Processing Systems, 24, 2011.
[4] Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. 2011.
[5] André Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett, and Holger Hoos. Efficient parameter importance analysis via ablation with surrogates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[6] André Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. CAVE: Configuration assessment, visualization and evaluation. In Roberto Battiti, Mauro Brunato, Ilias Kotsireas, and Panos M. Pardalos, editors, Learning and Intelligent Optimization, pages 115–130, Cham, 2019. Springer International Publishing.
[7] Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 1999.
[8] Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti, et al. Accounting for variance in machine learning benchmarks.
Proceedings of Machine Learning and Systems, 3:747–769, 2021.
[9] Xavier Bouthillier, César Laurent, and Pascal Vincent. Unreproducible research is reproducible. In International Conference on Machine Learning, pages 725–734. PMLR, 2019.
[10] Xavier Bouthillier and Gaël Varoquaux. Survey of machine-learning experimental methods at NeurIPS 2019 and ICLR 2020. PhD thesis, Inria Saclay Ile de France, 2020.
[11] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the Learning to Rank Challenge, pages 1–24. PMLR, 2011.
[12] Katharina Eggensperger, Marius Lindauer, and Frank Hutter. Pitfalls and best practices in algorithm configuration. Journal of Artificial Intelligence Research, 64:861–893, 2019.
[13] William Falcon et al. PyTorch Lightning. GitHub repository, 3, 2019.
[14] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-sklearn 2.0: The next generation. CoRR, abs/2007.04074, 2020.
[15] V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Tejani. High-level library to help with training neural networks in PyTorch. https://github.com/pytorch/ignite, 2020.
[16] Jan-Mark Geusebroek, Gertjan J Burghouts, and Arnold WM Smeulders. The Amsterdam library of object images. International Journal of Computer Vision, 61:103–112, 2005.
[17] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. CoRR, abs/2106.11959, 2021.
[18] Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, et al. Analysis of the AutoML challenge series. Automated Machine Learning, 177, 2019.
[19] Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang, Brian Barr, and C Bayan Bruss. BASED-XAI: Breaking ablation studies down for explainable artificial intelligence.
arXiv preprint arXiv:2207.05566, 2022.
[20] Eduardo Hariton and Joseph J Locascio. Randomised controlled trials—the gold standard for effectiveness research. BJOG: An International Journal of Obstetrics and Gynaecology, 125(13):1716, 2018.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[22] Jeremy Howard and Sylvain Gugger. fastai: A layered API for deep learning. CoRR, abs/2002.04688, 2020.
[23] Kosuke Imai, Dustin Tingley, and Teppei Yamamoto. Experimental designs for identifying causal mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society, 176(1):5–51, 2012.
[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[25] Ron Kohavi et al. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pages 202–207, 1996.
[26] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
[27] Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. On the reproducibility and replicability of deep learning in software engineering. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(1):1–46, 2021.
[28] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.
[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[30] Moritz Meister, Sina Sheikholeslami, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. Maggy: Scalable asynchronous parallel hyperparameter search.
In Proceedings of the 1st Workshop on Distributed Machine Learning, pages 28–33, 2020.
[31] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. CoRR, abs/1712.05889, 2017.
[32] R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.
[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Curran Associates Inc., Red Hook, NY, USA, 2019.
[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[35] David Picard. Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision, 2021.
[36] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machine learning research (a report from the NeurIPS 2019 reproducibility program). The Journal of Machine Learning Research, 22(1):7459–7478, 2021.
[37] Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hyperparameters of machine learning algorithms. The Journal of Machine Learning Research, 20(1):1934–1965, 2019.
[38] Tao Qin and Tie-Yan Liu. Introducing LETOR 4.0 datasets.
arXiv preprint arXiv:1306.2597, 2013.
[39] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[40] Sina Sheikholeslami, Moritz Meister, Tianze Wang, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. AutoAblation: Automated parallel ablation studies for deep learning. In Proceedings of the 1st Workshop on Machine Learning and Systems, pages 55–61, 2021.
[41] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Hugo Jair Escalante and Katja Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 3–26. PMLR, 06–12 Dec 2021.
[42] Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2367–2376, 2018.
[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, October 2020. Association for Computational Linguistics.
[44] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
[45] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al.
Big Bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283–17297, 2020.
[46] Donglin Zhuang, Xingyao Zhang, Shuaiwen Song, and Sara Hooker. Randomness in neural network training: Characterizing the impact of tooling. Proceedings of Machine Learning and Systems, 4:316–336, 2022.
[47] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar C Tatikonda, Nicha Dvornek, Xenophon Papademetris, and James Duncan. AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients. Advances in Neural Information Processing Systems, 33:18795–18806, 2020.
[48] Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-PyTorch Tabular: Multi-fidelity metalearning for efficient and robust AutoDL. arXiv preprint arXiv:2006.13799, 2020.

7 Submission Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] Our results can be found in Sections 3.1 to 3.3.
(b) Did you describe the limitations of your work? [Yes] See Section 4.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 4.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] They are applied throughout the paper.

2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A] There are no theoretical results in our work.
(b) Did you include complete proofs of all theoretical results? [N/A] There are no theoretical results in our work.

3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)?
[Yes] We have included the code that was used to run all the experiments and produce the tables and figures, as a zip file.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] We include the raw results that were used to obtain our analysis.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] We have included them in the supplementary.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] We have followed standard development practices.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyper-parameter settings, and how they were chosen)? [Yes] We have included them in the supplementary.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We have included them in the supplementary.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 3.3.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We use an identical evaluation protocol when comparing between methods for all our experiments in Sections 3.1 to 3.3.
(i) Did you compare performance over time? [N/A] Performance over time is not applicable for our work.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] The random seeds used are in the code in our supplementary.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Results are in Sections 3.2 and 3.3.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations?
[Yes] We use the same benchmark as [17].
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] We have included it in the supplementary.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? [Yes] They are described in Section 3.1 and the supplementary.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] Table 1 and supplementary.
(b) Did you mention the license of the assets? [Yes] We provide details of all assets in the supplementary.
(c) Did you include any new assets either in the supplemental material or as a url? [N/A] We do not use any new assets.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
Venue: automl.cc/AutoML/2023/ABCD_Track
Year: 2023
Title: ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments
Authors: Iordanis Fostiropoulos, Laurent Itti
Keywords: Machine Learning Systems, Ablation Experiments, Experiment Design
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments

Iordanis Fostiropoulos, Laurent Itti
University of Southern California, Los Angeles, California

Abstract: Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as 'data-parallelism' or 'model-parallelism'. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, 'Tablator', where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR; https://github.com/fostiropoulos/ablator

1 Introduction

Machine Learning (ML) research has been criticized for an inability to explain the reasons a method provides an improvement on a specific benchmark.
It can be unclear whether a novel component is responsible for the improvement or the result of a statistical outlier [35].

Ablation is used to understand how the hyperparameters and architectural components contribute to the performance of a method. This is in contrast to Hyper-Parameter Optimization (HPO) or Neural Architecture Search (NAS), where the objective is to search for the single best-performing configuration. As the complexity of ML models increases, so does the number of components and parameters that need to be ablated, which increases the search space of possible configurations. Therefore, efficient horizontal scaling of multiple parallel experimental trials is necessary.

There is a lack of available frameworks for horizontal scaling of ablation experiments. Currently, ML practitioners manually perform horizontal scaling for experiments, such as for hyperparameter selection, distributed execution, consolidation, and analysis of artifacts [10]. Additionally, current frameworks [31] for distributed execution do not provide native support for maintaining the state of an experiment and resuming the execution of multiple trials, referred to as experiment persistence. We find that errors in the early stages of experiments can propagate to the analysis and lead to misleading conclusions. Possible errors may be introduced by sampling bias in the hyperparameter selection strategy or by fault intolerance of the distributed execution, survival bias.

The execution of randomized control trials is necessary to determine causal effects [23, 20]. We identify several sources of errors that can influence the results. We categorize them as Analysis, Execution, and Implementation errors.
AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0

Figure 1: Left is the rapid prototyping process when using ABLATOR, where only the method implementation and the configuration are required to RUN() the study and provide ANALYSIS(). ABLATOR handles the horizontal scaling of experimental trials on a cluster of nodes and is fault tolerant, where trials can be continued on the same or a different node due to the persistence provided by ABLATOR. Right is the process without ABLATOR, where the user must use different libraries or manually perform 'HPO Selection', 'Resource Allocation', and 'Analysis'. Additional manual effort is required to integrate between the libraries, where errors between the different steps propagate to an erroneous analysis. ABLATOR provides automation by removing boiler-plate code and managing errors internally.

Analysis errors can result from the hyperparameter selection sampling bias. Nonrandom effects during experiment execution can also introduce analysis errors. For example, inconclusive trials due to out-of-memory errors caused by a larger model footprint would introduce survival bias to the analysis that will favor smaller models. Implementation errors are mistakes made by users caused by the increased code complexity of ablating multiple method components while maintaining different code bases. We discuss the details of our analysis in Section 3.2.

To aid in error-free horizontal scaling of multiple experiments in the ML community, we propose a stateful experiment paradigm where we unify all experiment stages under a single framework. A stateful experiment is initialized by the configuration and code implementation of a method. Our framework maintains the state of each experimental trial and provides experiment persistence, where the experiment can continue execution agnostic to the execution environment.
The analysis artifacts are produced automatically by the experiment state for faster prototyping. Our paradigm is implemented in our tool ABLATOR with support for PyTorch [33] model development. We present an analysis of the sources of errors and provide recommendations that can be useful beyond our framework. We use our framework to study the effect of multiple training and model components on the performance of a Transformer model for tabular data, 'Tablator', where we perform a large-scale ablation study of 2,337 trials. Our contributions can be summarized as follows. First, we provide a formalization of a stateful experiment design paradigm that we use to address common errors in the execution of ML experiments. Second, we introduce ABLATOR, a framework that implements our paradigm and facilitates the automated execution and analysis of a model implementation given a configuration. Third, we identify sources of error in ML ablation studies and provide recommendations for mitigating them. Fourth, we perform the largest ablation study to date of a Deep Learning model on tabular data and provide an analysis that can be useful to the research community.

We first introduce the features of ABLATOR relevant to horizontal scaling of experiments. Next, we evaluate the main features of our tool in a case study demonstrating the horizontal scaling capabilities of ABLATOR. We present our results using three research questions, Sections 3.1 to 3.3.

2 Methods

To implement ABLATOR and address common issues in horizontal scaling of experiments, it is necessary to introduce the formalism of a 'stateful experiment design' paradigm. In this section, we introduce our paradigm and, in Section 2.4, the implementation of ABLATOR.
We identify three stages of an experiment: the design, execution, and analysis (Sections 2.1 to 2.3).

2.1 Experiment Design

During the design phase of an ML ablation study, a hypothesis is defined as an experiment on the improvement that an architectural component, such as Residual Connections, provides to the performance of the model. The search-space of our hypothesis can be defined as Residual = [True, False]. The methodology of our experiment is defined by the implementation of the model.

Multiple experimental trials are required to improve the statistical power of a test [20], which requires random sampling from the search-space. An experimental trial can be described as a stochastic process that produces a performance metric. The stochasticity can be observed when performance differs significantly with identical initial conditions, such as re-running the same experiment but obtaining different results.

Thus, to define a trial, we maintain two states to describe the system at any given point: the initial conditions (Sections 2.1.1 and 2.1.2) and the current state (Section 2.2). The initial conditions of a trial are defined by the sampled hyper-parameters and the implementation.

distributed.yaml:

    total_trials: 2000
    optim_metrics: [[val_loss, min]]
    tune:
      train_config.optimizer_config.name: ["adam", ...
      train_config.dataset: ["year", "yahoo", "helena", ...
      model_config.mask_type: ["mix", "global", "full", "random"]
      model_config.residual: [True, False]
      model_config.random_mask_alpha: [0.5, 1]

prototyping.yaml:

    train_config:
      dataset: adult
      optimizer_config:
        name: adam
    model_config:
      mask_type: random

Type-checked configuration classes:

    @configclass
    class TablatorConfig(ModelConfig):
        residual: bool = True
        d_out: Derived[ty.Optional[int]] = None
        mask_type: MaskType = MaskType("random")

    @configclass
    class RunConfig(ParallelConfig):
        experiment_dir: Stateless[Optional[str]] = None
        model_config: ModelConfig
        train_config: TrainConfig

Figure 2: ABLATOR provides a configuration system specific to ML experiments, where it has to encompass multiple trials in a compact definition and be unambiguous. On the left is an illustration of the configuration for distributed execution (distributed.yaml) and method prototyping (prototyping.yaml). On the right, the configuration is type checked by the ABLATOR library. The library provides flexible type definitions (red) that are resolved during run-time. The configuration is compact and unambiguous at initialization, supporting our stateful experiment design paradigm in Section 2.1.

2.1.1 Configuration. The configuration describes the hyperparameter search-space from which the hyperparameters are sampled. Two custom Python annotations are introduced, Stateless and Derived, to define attributes to which the experiment state is agnostic, while unannotated attributes are assumed to be stateful control variables. Stateful attributes require an assignment during the initialization stage unless they are annotated as Optional.

Stateless configuration attributes can be used as a proxy for variables that can take different value assignments between trials or experiments. For example, the learning rate can be set as an independent variable and must be annotated as stateless. Additionally, there are variables that take different values between experiments and trials to which the state is agnostic; for example, a random seed or a directory path that differs between execution environments can be annotated as stateless.

Derived attributes are un-decided at the start of the experiment and do not require a value assignment. Instead, the value is determined by internal experiment processes that can depend on other experimental attributes, such as the dataset.
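To make the annotation semantics concrete, here is a small, self-contained sketch (a hypothetical re-implementation for illustration only, not ABLATOR's actual internals) that expresses Stateless and Derived markers via `typing.Annotated` and partitions a config class into stateful, stateless, and derived attributes:

```python
from typing import Annotated, Optional, get_args, get_origin, get_type_hints

# Illustrative marker classes; ABLATOR's real annotations may be implemented differently.
class _Stateless: pass
class _Derived: pass

def Stateless(tp):
    """Mark an attribute the experiment state is agnostic to (e.g. paths, seeds)."""
    return Annotated[tp, _Stateless]

def Derived(tp):
    """Mark an attribute decided deterministically during execution (e.g. from the dataset)."""
    return Annotated[tp, _Derived]

class RunConfig:
    batch_size: int = 32                                  # unannotated -> stateful control variable
    experiment_dir: Stateless(Optional[str]) = None       # differs between environments
    d_out: Derived(Optional[int]) = None                  # inferred from the dataset at run-time

def partition(cfg_cls):
    """Split a config class's attributes by annotation kind."""
    groups = {"stateful": [], "stateless": [], "derived": []}
    for name, hint in get_type_hints(cfg_cls, include_extras=True).items():
        meta = get_args(hint)[1:] if get_origin(hint) is Annotated else ()
        if _Stateless in meta:
            groups["stateless"].append(name)
        elif _Derived in meta:
            groups["derived"].append(name)
        else:
            groups["stateful"].append(name)
    return groups
```

Under this sketch, only the stateful group would participate in the experiment's identity, while stateless and derived fields are excluded from it.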
However, given the same initial state, the attribute is expected to result in the same value and is therefore deterministic. For example, the input size used in a model's architecture that depends on the dataset will be annotated as Derived during the experiment design phase.

The annotations address common requirements of ML experiments, where a configuration may have to describe a search-space that encompasses multiple trials, as opposed to taking on a specific value assignment at initialization. Additionally, an ML experiment can have attributes that are difficult to model at initialization but can be inferred during execution. For a stateful design paradigm, the configuration should be unambiguous at the initialization state, i.e. Figure 2.

2.1.2 Implementation. The implementation describes the methodology of the hypothesis. Invariance of the implementation w.r.t. the method evaluated produces a single code artifact that encapsulates all methods, i.e. a single code base for using and not using residual connections. The implementation computes one or more evaluation metrics. Lastly, the implementation should have a deterministic value assignment to the variables we defined as Derived.

Implementation invariance provides a compact representation and is robust to errors. A compact representation provides ease of use as a consequence of a shared implementation among the ablating components, where the differences are specified through the configuration and applied by conditional if statements. The advantage of this approach is that the performance variance caused by implementation differences is minimized, where even the order of matrix multiplication can have significant effects on the method performance [46].

2.2 Experiment Execution

Experiment state can be Running or Complete as the aggregate of the state of all experimental trials. Each trial can additionally be in three more states: Pending, Failed, or Pruned. Pending trials are defined by their initial conditions alone, i.e.
the sampled hyperparameters. A Running trial extends the definition to include a checkpoint. Complete trials extend the definition to include one or more metrics, such as the validation loss. Pruned and Failed trials are the result of irrecoverable errors during initialization or execution. A fault-tolerant strategy reschedules trials with recoverable errors as Pending and attempts to resume from the checkpoint. A long-running experiment can be interrupted (i.e. server maintenance) while errored trials do not interfere with the results (i.e. failed trials due to recoverable errors).

Checkpoint describes the optimization state of a trial and contains sufficient information to resume execution. ABLATOR stores the model weights, optimizer, scheduler, and training meta-data, such as the current training iteration, using a compact representation. The checkpoint mechanism in ABLATOR can be extended to support custom use cases, i.e. RL. Lastly, maintaining the state of the experiment requires keeping track of the checkpoints and results. Multiple checkpoints are stored locally on each node and can be synchronized with cloud storage. The experiment is agnostic to the execution environment: experiment persistence.

2.3 Actionable Analysis

Analysis that is actionable is the result of automation that provides sufficient artifacts to support decision making. The artifacts should help facilitate a quick and informed decision on the likelihood of the hypothesis. The experiment state is used to infer the hypothesis, i.e. 'what are we ablating?', and the conclusiveness of the analysis, i.e. 'is the trial failed?'. The analyses ABLATOR provides infer the search-space, such as control and independent variables, from the configuration and the variable type to produce the corresponding artifacts. The artifacts produced address common problems in evaluating ML methods (Section 3.2).
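The trial lifecycle and fault-tolerant rescheduling described in Section 2.2 can be sketched as a small state machine. This is an illustrative stand-in only; the real scheduler, checkpoint format, and error taxonomy in ABLATOR are assumed to be richer:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class State(Enum):
    PENDING = auto()   # defined by initial conditions (sampled hyperparameters)
    RUNNING = auto()   # additionally has a checkpoint
    COMPLETE = auto()  # additionally has one or more metrics
    FAILED = auto()    # irrecoverable error; excluded from analysis
    PRUNED = auto()

class RecoverableError(Exception):
    """E.g. node preemption: the trial can resume from its checkpoint."""

@dataclass
class Trial:
    hyperparams: dict
    state: State = State.PENDING
    checkpoint: Optional[dict] = None
    metrics: Optional[dict] = None

def run_trial(trial, step_fn):
    """Run one trial; recoverable errors reschedule it as PENDING from its checkpoint."""
    trial.state = State.RUNNING
    try:
        trial.metrics, trial.checkpoint = step_fn(trial.hyperparams, trial.checkpoint)
        trial.state = State.COMPLETE
    except RecoverableError:
        trial.state = State.PENDING      # resume later from trial.checkpoint
    except Exception:
        trial.state = State.FAILED       # irrecoverable

def experiment_state(trials):
    """Aggregate state: Running until every trial has reached a terminal state."""
    terminal = {State.COMPLETE, State.FAILED, State.PRUNED}
    return "Complete" if all(t.state in terminal for t in trials) else "Running"

# Demo: a step function that fails once with a recoverable error, then succeeds.
_calls = {"n": 0}
def _flaky_step(hp, ckpt):
    _calls["n"] += 1
    if _calls["n"] == 1:
        raise RecoverableError("preempted")
    return {"val_loss": 0.1}, {"epoch": 1}

trial = Trial({"lr": 0.01})
run_trial(trial, _flaky_step)        # recoverable failure -> rescheduled as PENDING
rescheduled = trial.state
run_trial(trial, _flaky_step)        # resumes and completes
```

The key design point mirrored here is that a recoverable failure returns a trial to PENDING rather than FAILED, so transient infrastructure errors do not bias the surviving population of trials.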
For each attribute, the goal is to encapsulate the best, average, variance, and distribution of the performance metric under a single figure; i.e. Figures 4 and 5.

2.4 ABLATOR

ABLATOR is designed in Python with support for PyTorch models, while the distributed execution system uses Ray Core [31]; Figure 1. We describe the features of ABLATOR important in addressing a stateful experiment paradigm. ABLATOR can be extended or customized specific to the use-case without loss of automation, where an object-oriented design provides access to function overriding. The features of ABLATOR provide ease of use, requiring only that an experiment be defined through an implementation and a configuration. Automation is supported by providing an abstraction layer on distributed execution with fault tolerance, artifact consolidation, and analysis. Our framework is agnostic to the execution environment and can run on a laptop or a cluster of nodes.

Configuration uses a hierarchical dictionary-like format that is easy to understand and can be converted to and from yaml files. ABLATOR uses a strict type-checking system with custom annotations (Section 2.1.1). A unique signature identifier ("ID") is generated for each experiment that corresponds to the values of the stateful configuration attributes, while for a trial, the identifier is based on the unique value assignment of all configurable properties. Thus, the configuration system allows for a hierarchical representation of trials under a single experiment and facilitates experiment persistence, where multiple experiments are stored in the same directory.

Implementation A Trainer class manages the physical resources of the experiment. There are two options according to the use case: ProtoTrainer for prototyping in a local environment, and ParallelTrainer for horizontal scaling of a single experiment. ParallelTrainer is unique to ABLATOR, where multiple trials are managed and executed in parallel.
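The identifier scheme described under Configuration above can be illustrated with a short sketch. The hashing scheme and field names here are assumptions for illustration, not ABLATOR's actual implementation: the experiment ID hashes only stateful attribute values, so configs that differ only in stateless fields map to the same experiment, while the trial ID hashes the full value assignment:

```python
import hashlib
import json

# Hypothetical stateless fields for this sketch (cf. the Stateless annotation).
STATELESS = {"experiment_dir", "random_seed"}

def _digest(d):
    # Canonical JSON so the hash is key-order independent and reproducible.
    return hashlib.sha256(json.dumps(d, sort_keys=True).encode()).hexdigest()[:12]

def experiment_id(config):
    """Signature over stateful attributes only."""
    stateful = {k: v for k, v in config.items() if k not in STATELESS}
    return _digest(stateful)

def trial_id(config):
    """Signature over the unique value assignment of all configurable properties."""
    return _digest(config)

cfg_a = {"lr": 0.1, "residual": True, "experiment_dir": "/tmp/a", "random_seed": 1}
cfg_b = {"lr": 0.1, "residual": True, "experiment_dir": "/mnt/b", "random_seed": 7}
```

Because the two configs differ only in stateless fields, they share an experiment ID (and hence a storage location), which is what makes resuming in a different environment possible.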
Prototyping to experiment deployment requires a single change: ProtoTrainer ⇒ ParallelTrainer.

Artifact Persistence For every resource node, the trials are executed in parallel, and a failure in a single trial does not result in interruption of the experiment. We use the master node to maintain the experiment state (Section 2.2) and synchronize the artifacts of all nodes with a central database. Cloud compute nodes are often ephemeral, and restarting the experiment requires only that the files be synchronized among the centralized storage and all nodes. Furthermore, the files stored in the central storage are sufficient to perform an analysis or recover from errors.

Analysis Artifacts are specific to numerical attributes and categorical attributes. The attribute type is informed by the configuration. Figures are artifacts that summarize the mean, best, and distribution of a performance metric. For numerical attributes, we use scatter-plots with optional interpolation curves, while for categorical attributes we use violin-plots. The analysis can be extended to support custom use cases, such as additional figures or tables, while still being automatically generated from the experiment state; examples are in Section 3.3 and our supplementary.

3 Experiments and Results

We first present how ABLATOR can be used for horizontal scaling with an ablation study on 'Tablator', a Transformer model we designed for this study; Section 3.1. In Section 3.2 we categorize common errors during horizontal scaling of ablation experiments and provide our recommendations. In Section 3.3 we provide the results of an ablation experiment on a tabular dataset benchmark. For reasons of brevity, we discuss only the results most relevant to ABLATOR. We attach the code that was used for our experiments and analysis, and additional experiments, in the supplementary.

3.1 RQ-1: How can ABLATOR improve the horizontal scaling of thousands of experimental trials?

ABLATOR requires the configuration and implementation.
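Conceptually, scaling from prototyping to deployment is the one-line ProtoTrainer ⇒ ParallelTrainer swap described in Section 2.4. The stub below sketches that interface; the class names follow the paper, but everything else (sequential fan-out, the `tune` key, the training stub) is a stand-in for illustration, not ABLATOR's API:

```python
class ProtoTrainer:
    """Runs a single trial locally; used for prototyping."""
    def __init__(self, model_fn, config):
        self.model_fn, self.config = model_fn, config
    def launch(self):
        return [self.model_fn(self.config)]

class ParallelTrainer(ProtoTrainer):
    """Same interface; fans the search-space out to many trials.
    (In ABLATOR this is distributed over Ray; here it is a sequential stand-in.)"""
    def launch(self):
        results = []
        for lr in self.config["tune"]["lr"]:
            trial_cfg = {**self.config, "lr": lr}
            results.append(self.model_fn(trial_cfg))
        return results

def train(config):
    """The user's implementation: returns a metrics dict for one trial."""
    lr = config.get("lr", 0.1)
    return {"lr": lr, "val_loss": 1.0 / (1 + lr)}

config = {"tune": {"lr": [0.01, 0.1, 1.0]}}
# Prototype locally, then deploy with a single change:
proto_results = ProtoTrainer(train, config).launch()
parallel_results = ParallelTrainer(train, config).launch()
```

The point of the shared interface is that the user's `train` implementation is untouched by the switch; only the trainer class changes.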
We extend the implementation of FT-Transformers (FT-T) [17] (https://github.com/Yura52/tabular-dl-revisiting-models) with minimal changes to the original code. We implement a model we call 'Tablator' and evaluate all the design components of FT-T as well as the effect of Residual Connections [21] and Attention Masks inspired by BigBird [45]. We evaluate 'Full', 'Mixed', 'Global', and 'Random' attention mechanisms and explain their implementation in the supplementary.

We perform an ablation on 14 model hyperparameters and components in total, and evaluate the effect that model capacity, dropout hyper-parameters, prenormalization, weight initialization, and the activation function have on the model performance. Additionally, we evaluate 7 dataset preprocessing techniques and training configurations, such as feature encoding methods, missing value imputation, feature normalization, training time, and optimization.

The differences between 'Tablator' and FT-T are an additional module for Attention Masks that requires 9 additional lines of code, as well as 2 inserted lines of code for residual connections. The majority of the development effort was directed towards making the original dataset performant and converting it to a PyTorch Dataset as opposed to a Python dataclass. We define the tunable configurable hyperparameters as shown in Figure 2.

We first verified our implementation with a ProtoTrainer in this section, and then we scaled our experiment with a single code change, using a ParallelTrainer, to thousands of trials for our results in Section 3.3.
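The mask variants named above can be pictured as boolean attention patterns over tokens. The following is a simplified sketch of the general idea only; the actual BigBird-style construction used in 'Tablator' is described in the paper's supplementary and is not reproduced here:

```python
import random

def full_mask(n):
    """'Full' attention: every token attends to every token."""
    return [[True] * n for _ in range(n)]

def global_mask(n, g=1):
    """'Global' attention: the first g tokens attend / are attended to globally;
    all other tokens keep only their diagonal (self) connection here."""
    m = [[i == j for j in range(n)] for i in range(n)]
    for i in range(n):
        for j in range(g):
            m[i][j] = m[j][i] = True
    return m

def random_mask(n, alpha=0.5, seed=0):
    """'Random' attention: each off-diagonal connection is kept with probability alpha."""
    rng = random.Random(seed)
    return [[i == j or rng.random() < alpha for j in range(n)] for i in range(n)]
```

A 'Mixed' variant would combine these patterns (e.g. global plus random connections) per the BigBird recipe.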
For this experiment, it took significantly more time to write the current section of this paper than it took to write the code and start the execution of the experiments.

3.2 RQ-2: What are common sources of errors during horizontal scaling of experiments?

We identify 3 categories of errors, Analysis†, Execution‡, and Implementation∗ errors, that are based on empirical observations, and we use previous analysis [10, 8, 9, 27, 36, 1, 46, 12] to support our conclusions. In this section, we provide examples of each and attach additional analysis in our supplementary.

Figure 3: We evaluate how Budget Allocation‡ can influence the analysis of an ablation study. We vary the number of trials we use for analysis ('N trials'). We compare estimating the performance of a method on a dataset using the mean (left) (i.e. ANOVA) or the best (right) trial (i.e. proof-by-existence). Evaluating the performance of a component by its mean performance would require fewer trials for an easier dataset ('Covtype') when compared to using the best trial. For a more challenging dataset ('Aloi'), evaluating by the best trial would be more efficient, as the performance converges at around 20 trials (right figure) compared to >50 for the mean (left figure). We conclude that the ablation budget should be taken into account and be relevant to the type of analysis.

Sampling Strategy† can be incompatible with the method used to evaluate the performance of a component and lead to misleading analysis [41]. For example, performing HPO and comparing the mean performance of the sampled trials can bias the result towards a single component variant. We perform two identical experiments using Tablator with an identical budget for the CovType ('CO') dataset [7]. When randomly sampling between 5 optimizers, AdaB [47], Adam [24], AdamW [29], RAdam [28], and SGD [39], every optimization algorithm was sampled with an even probability P(O) ≈ 0.2.
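The contrast between even random sampling and an exploitative sampler can be shown with a toy simulation. Here a greedy bandit-style sampler stands in for an adaptive HPO method; the arm names, score values, and noise model are illustrative and are not the paper's experiment:

```python
import random

# Toy per-optimizer mean scores; gaps are exaggerated so the greedy sampler locks on quickly.
ARMS = {"sgd": 0.9, "adam": 0.6, "adamw": 0.58, "radam": 0.55, "adab": 0.5}

def noisy_score(rng, arm):
    return ARMS[arm] + rng.gauss(0, 0.05)

def run(n, adaptive, seed=0):
    rng = random.Random(seed)
    # Warm start: 3 observations per arm.
    history = {a: [noisy_score(rng, a) for _ in range(3)] for a in ARMS}
    for _ in range(n):
        if adaptive:
            # Exploit the empirically best arm (stand-in for an adaptive sampler).
            arm = max(history, key=lambda a: sum(history[a]) / len(history[a]))
        else:
            # Random sampling: every arm has even probability.
            arm = rng.choice(list(ARMS))
        history[arm].append(noisy_score(rng, arm))
    return {a: len(s) for a, s in history.items()}

counts_random = run(500, adaptive=False)
counts_greedy = run(500, adaptive=True)
```

Under random sampling every arm is well explored, so per-arm mean estimates are trustworthy; the adaptive sampler concentrates its budget on the apparent winner and leaves the other arms with too few samples for a fair mean comparison.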
In contrast, when performing HPO with the Tree-structured Parzen Estimator (TPE) [3], SGD was oversampled, with P(SGD) = 0.76, as it was found to perform relatively better than the other methods. The other optimization methods were undersampled by TPE, and their estimated performance is lower when compared to the empirical mean performance of the same method calculated via Random Sampling. When TPE was used, all optimizers appeared to underperform on average by 4.6% and 3.8% when evaluating the best and mean trial performance, respectively. We conclude that statistical tests can be influenced by the bias of the HPO method used to sample configurations, and the performance of undersampled components might not be fully explored.

Survival Bias† can be caused by nonrandom execution errors. We identify the trials for which there were memory errors. We perform feature importance analysis and use a surrogate random forest model [34] to predict whether a trial will result in a memory error. We find that the configuration attributes related to the dataset and the hidden dimension were the most important. A larger dataset has more features, which leads to a model with a larger hidden dimension. The attributes related to the hidden dimension scored 23% higher than the average feature importance. We conclude that smaller models and datasets will have a Survival Bias from the fewer out-of-memory execution errors and that such bias could be mitigated by better resource allocation. For example, one can group experiments by their memory utilization so as to avoid out-of-memory errors from the largest trial.

    Dataset    CA↓    AD↑    HE↑    JA↑    HI↑    AL↑    EP↑    YE↓    CO↑    YA↓    MI↓
    FT-T       0.459  0.859  0.391  0.732  0.729  0.960  0.898  8.855  0.970  0.756  0.746
    Tablator   0.535  0.856  0.368  0.718  0.723  0.921  0.896  8.778  0.930  0.780  0.749
    ΔImp.∗    -0.076  0.003  0.023  0.014  0.006  0.039  0.002  0.077  0.040 -0.024 -0.003

Table 1: We evaluate the difference between the best-performing trials as reported by FT-Transformer ('FT-T') [17] and as found by our ablation experiments in Section 2.1. FT-T is in the subspace of configurations of Tablator, where a greedy HPO strategy is used as opposed to the random sampling used for Tablator. As such, we expect Tablator to perform similarly but not better. We use the benchmark as a way to evaluate Implementation Errors∗ from Section 3.2. We conclude that our implementation contains no errors, as the relative difference (ΔImp.∗) is within the expected margin of error between HPO and random sampling.

Figure 4: Evaluation of the effect of a larger model for a regression dataset, where (RMSE)↓ is normalized for the relative difficulty of each dataset. A larger model performs better but with higher variance, where the uncertainty on the estimated performance increases. A larger model might be a riskier choice when deploying a model that requires iterative training.

Resource Utilization statistics‡ We observe the resource utilization statistics: the mean usage of a trial is 3,075±3,578 (MiB), while the maximum is 32,303 (MiB). The high variance in memory utilization is a consequence of a search space that correlates with memory utilization. Allocating resources based on the largest trial might be infeasible. Using a heuristic for resource utilization might be necessary.

Budget Allocation‡ We vary the number of experimental trials for 10 repeated observations and report the best and mean performance in Figure 3. An increased budget reduces the variance of the mean performance. We report less variance in the performance of the best trial for repeated observations. We conclude that, for 'Tablator', fewer trials are required to obtain an estimate of the top performance, while the mean performance would require more trials.

Implementation Errors∗ Our observations on implementation errors extend previous analysis [46, 27, 36, 12] on the impact of ML tooling, where the sources of errors are poor development practices and variance introduced by tooling.
Packaging has the benefit of incremental development and modular design, where in the example of 'Tablator' two methods ([45] and [17]) can be combined. Additionally, as the method complexity increases, version control that includes the configuration, with analysis that corresponds to the implementation, can prevent misinterpretation of the results.

3.3 RQ-3: Can ABLATOR be used to perform a large-scale ablation study on tabular data?

We use 'Tablator', presented in Section 3.1, to evaluate possible improvements in data processing, the Transformer model architecture, and the effect of training hyperparameters on 2,337 trials, where the current largest ablation on tabular data is 2,000 trials [48].

Figure 5: Example of automatically generated analysis artifacts from ABLATOR. On the left are the artifacts for 'CO' [7] and on the right for 'AL' [16]. We compare the effect of an optimizer on the performance on a dataset. In agreement with [44], there is no single model that generalizes across all datasets; for example, Adam [24] under-performs for 'AL' but not for 'CO'. We conclude that separate ablation studies will be required for different datasets.

Our results are summarized in Figures 4 and 5. In Table 1 we report the Accuracy, where higher is better (↑), and the root-mean-square error ('RMSE'), where lower is better (↓), on 11 datasets [32, 25, 18, 18, 2, 16, 17, 4, 7, 11, 38], identical to the benchmark of FT-T [17]. We find that Tablator performs similarly on all datasets. The goal of the benchmark comparison is to verify our implementation, while the goal of our study is to evaluate general methods that work best across datasets, not a benchmark improvement. Similarly to FT-T [17], we conclude that the simplest methods work best in most general cases, i.e. SGD [39] with momentum has the best mean performance on 9 of 11 datasets.
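Comparing components by mean performance versus best-trial performance across datasets, as done in this section, can be sketched with a small rank aggregation. The numbers below are toy values chosen to show the mean/best divergence, not the paper's results:

```python
def rank_components(results, reduce_fn):
    """results: {dataset: {component: [trial scores (higher is better)]}}.
    Returns each component's average rank (1 = best) across datasets."""
    ranks = {c: [] for c in next(iter(results.values()))}
    for per_component in results.values():
        summary = {c: reduce_fn(scores) for c, scores in per_component.items()}
        ordered = sorted(summary, key=summary.get, reverse=True)
        for pos, c in enumerate(ordered, start=1):
            ranks[c].append(pos)
    return {c: sum(r) / len(r) for c, r in ranks.items()}

def mean_fn(scores):
    return sum(scores) / len(scores)

best_fn = max

# Toy scores: 'radam' has the best single trials but a worse average than 'sgd'.
results = {
    "ds1": {"sgd": [0.80, 0.79], "radam": [0.85, 0.60]},
    "ds2": {"sgd": [0.78, 0.77], "radam": [0.82, 0.55]},
}
mean_rank = rank_components(results, mean_fn)
best_rank = rank_components(results, best_fn)
```

The two aggregations can disagree, which is exactly why the paper reports both: a method with high-variance trials can win on best-trial rank while losing on mean rank.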
For more complex methods, there is a large variance in the performance of the method between datasets. For example, we find that RAdam [28] ranks on average 2.71 for classification datasets but 3.75 for regression datasets when evaluated by the mean performance. Additionally, more complex methods may produce the best-performing trial but perform worse on average, where RAdam ranks on average 2.25 when evaluated on the best-performing trial for regression datasets (compared to 3.75). Our results indicate that using a complex method may require a large tuning budget to return good results. Additionally, we conclude that larger models only perform moderately better, Figure 4.

The high performance variance between different components on different datasets leads us to conclude that evaluations should be done with multiple datasets. Additionally, we find that tuning specific to the dataset and the training configuration would be required. Simple design choices, such as SGD and moderate model capacity, can provide a good starting point, while more complex training configurations can provide trade-offs between performance and uncertainty that can be specific to the use case.

From the median and mean performance observed in our results, we did not find any of the preprocessing methods to have a consistent, significant effect on the model performance. ABLATOR can help provide actionable results specific to the dataset. We conclude that several ablation experiments are required to evaluate a method, and ABLATOR is the only tool currently available to facilitate rapid evaluation.

4 Discussion

In our work we present ABLATOR, an AutoML framework for ablation experiments. Beyond our framework, there are several issues w.r.t. automated decision making, as there is no universal statistical test or threshold to accept or reject a hypothesis. Analysis requires domain expertise relevant to the evaluation setting.
Specific to ML research is the lack of methods for evaluating a hypothesis where the metric can be both non-normally distributed and heteroskedastic, i.e. Figure 5.

Broader Impact Statement Performing large-scale ablation experiments may require a large number of computational resources that can negatively impact the environment through CO2 emissions. However, the automation provided by ABLATOR can result in more effective use of computational resources and reduce CO2 emissions. ABLATOR can help improve research practices without a negative impact on society when used in the context in which it is presented.

5 Related Works

We identify four categories of work that are most similar to ours: work that focuses on errors introduced by tools and incorrect analysis, work on horizontal scaling of experiments, works that aid in ablation studies, and tools for automated HPO.

Previous work [10, 8, 9, 27, 36, 1, 46, 12] identifies the sources of erroneous analysis as poor experiment design practices resulting from improper use of statistical evaluation methods, HPO budget, HPO strategies, and tooling, and provides recommendations. We extend that work and investigate errors during horizontal scaling of experiments that lead to erroneous analysis. We identify errors from the sampling strategy, non-random execution errors, and implementation errors. We provide general recommendations in Section 3.2 and address the errors with ABLATOR.

Several tools have been proposed [13, 15, 22, 43, 26] that support distributed experiment execution. However, they require manual effort in integrating with other libraries for resource allocation, scheduling of experiments, resuming faulty trials, result aggregation, configuration sampling, and analysis. In contrast, ABLATOR combines all of the above in an automated fashion, where only the implementation and configuration of the method are used to produce the analysis artifacts.

Ablation frameworks introduce methods and tools specific to constructing ablation analysis artifacts.
Such methods can have limited use cases [19, 5, 37] or lack automation [42]. In contrast, ABLATOR provides analysis artifacts that give a holistic view of a method's performance and can be extended to support both automation and the specific use-cases addressed by the works above.

AutoML methods [14, 48, 6] are designed for HPO and can be extended to ablation experiments with support for automated analysis. Unlike ABLATOR, such tools are designed for simple use cases, such as statistical models, and require additional effort to scale the experiments horizontally. Such tools, and similar ones, can be used as the implementation provided to ABLATOR and as such are orthogonal to our work. AutoAblation [40] extends Maggy [30] to Deep Learning models. However, allocating and managing GPU resources for each trial requires manual effort, and AutoAblation does not provide experiment persistence and as such is not fault-tolerant. Additionally, its declarative design paradigm has limited use cases, as opposed to the object-oriented design of ABLATOR. As such, ABLATOR improves automation by managing GPU resources, storing experimental artifacts, restarting erroneous trials, and removing boiler-plate code, where only the method implementation with the configuration is required to provide automated analysis.

6 Conclusion

In this work, we identify several sources of error common in horizontal scaling of multiple experimental trials. We provide general recommendations and address the errors with a stateful experiment design paradigm. ABLATOR implements the paradigm to automate the scaling of ablation experiments across multiple resources and to produce analysis artifacts in an automated fashion for rapid iterative prototyping. We evaluate ABLATOR with a Transformer model for tabular data, 'Tablator', where we study the effect of several architectural components and hyperparameters in the largest ablation study for tabular data to date.
ABLATOR is an effective tool to conduct large-scale ablation studies with ease and leads to actionable insights that are particular to the experimental setting.

References

[1] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34:29304–29320, 2021.
[2] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5(1):4308, 2014.
[3] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in Neural Information Processing Systems, 24, 2011.
[4] Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. 2011.
[5] André Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett, and Holger Hoos. Efficient parameter importance analysis via ablation with surrogates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[6] André Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. CAVE: Configuration assessment, visualization and evaluation. In Roberto Battiti, Mauro Brunato, Ilias Kotsireas, and Panos M. Pardalos, editors, Learning and Intelligent Optimization, pages 115–130, Cham, 2019. Springer International Publishing.
[7] Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 1999.
[8] Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti, et al. Accounting for variance in machine learning benchmarks.
Proceedings of Machine Learning and Systems, 3:747–769, 2021.
[9] Xavier Bouthillier, César Laurent, and Pascal Vincent. Unreproducible research is reproducible. In International Conference on Machine Learning, pages 725–734. PMLR, 2019.
[10] Xavier Bouthillier and Gaël Varoquaux. Survey of machine-learning experimental methods at NeurIPS 2019 and ICLR 2020. PhD thesis, Inria Saclay Île-de-France, 2020.
[11] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the Learning to Rank Challenge, pages 1–24. PMLR, 2011.
[12] Katharina Eggensperger, Marius Lindauer, and Frank Hutter. Pitfalls and best practices in algorithm configuration. Journal of Artificial Intelligence Research, 64:861–893, 2019.
[13] William Falcon et al. PyTorch Lightning. GitHub repository, 3, 2019.
[14] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-sklearn 2.0: The next generation. CoRR, abs/2007.04074, 2020.
[15] V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Tejani. High-level library to help with training neural networks in PyTorch. https://github.com/pytorch/ignite, 2020.
[16] Jan-Mark Geusebroek, Gertjan J Burghouts, and Arnold WM Smeulders. The Amsterdam library of object images. International Journal of Computer Vision, 61:103–112, 2005.
[17] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. CoRR, abs/2106.11959, 2021.
[18] Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, et al. Analysis of the AutoML challenge series. Automated Machine Learning, 177, 2019.
[19] Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang, Brian Barr, and C Bayan Bruss. BASED-XAI: Breaking ablation studies down for explainable artificial intelligence.
arXiv preprint arXiv:2207.05566, 2022.
[20] Eduardo Hariton and Joseph J Locascio. Randomised controlled trials—the gold standard for effectiveness research. BJOG: An International Journal of Obstetrics and Gynaecology, 125(13):1716, 2018.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[22] Jeremy Howard and Sylvain Gugger. fastai: A layered API for deep learning. CoRR, abs/2002.04688, 2020.
[23] Kosuke Imai, Dustin Tingley, and Teppei Yamamoto. Experimental designs for identifying causal mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society, 176(1):5–51, 2012.
[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[25] Ron Kohavi et al. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pages 202–207, 1996.
[26] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
[27] Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. On the reproducibility and replicability of deep learning in software engineering. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(1):1–46, 2021.
[28] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.
[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[30] Moritz Meister, Sina Sheikholeslami, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. Maggy: Scalable asynchronous parallel hyperparameter search.
In Proceedings of the 1st Workshop on Distributed Machine Learning, pages 28–33, 2020.
[31] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. CoRR, abs/1712.05889, 2017.
[32] R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.
[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Curran Associates Inc., Red Hook, NY, USA, 2019.
[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[35] David Picard. Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision, 2021.
[36] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché-Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machine learning research (a report from the NeurIPS 2019 reproducibility program). The Journal of Machine Learning Research, 22(1):7459–7478, 2021.
[37] Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hyperparameters of machine learning algorithms. The Journal of Machine Learning Research, 20(1):1934–1965, 2019.
[38] Tao Qin and Tie-Yan Liu. Introducing LETOR 4.0 datasets.
arXiv preprint arXiv:1306.2597, 2013.
[39] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[40] Sina Sheikholeslami, Moritz Meister, Tianze Wang, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. AutoAblation: Automated parallel ablation studies for deep learning. In Proceedings of the 1st Workshop on Machine Learning and Systems, pages 55–61, 2021.
[41] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Hugo Jair Escalante and Katja Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 3–26. PMLR, 2021.
[42] Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2367–2376, 2018.
[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, October 2020. Association for Computational Linguistics.
[44] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
[45] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al.
Big Bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283–17297, 2020.
[46] Donglin Zhuang, Xingyao Zhang, Shuaiwen Song, and Sara Hooker. Randomness in neural network training: Characterizing the impact of tooling. Proceedings of Machine Learning and Systems, 4:316–336, 2022.
[47] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar C Tatikonda, Nicha Dvornek, Xenophon Papademetris, and James Duncan. AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients. Advances in Neural Information Processing Systems, 33:18795–18806, 2020.
[48] Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-PyTorch Tabular: Multi-fidelity metalearning for efficient and robust AutoDL. arXiv preprint arXiv:2006.13799, 2020.

7 Submission Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] Our results can be found in sections 3.1 to 3.3.
(b) Did you describe the limitations of your work? [Yes] See section 4.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See section 4.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] They are applied throughout the paper.
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A] There are no theoretical results in our work.
(b) Did you include complete proofs of all theoretical results? [N/A] There are no theoretical results in our work.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)?
[Yes] We have included the code that was used to run all the experiments and produce the tables and figures as a zip file.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] We include the raw results that were used to obtain our analysis.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] We have included them in the supplementary.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] We have followed standard development practices.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyper-parameter settings, and how they were chosen)? [Yes] We have included them in the supplementary.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We have included them in the supplementary.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See section 3.3.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We use an identical evaluation protocol when comparing between methods for all our experiments in sections 3.1 to 3.3.
(i) Did you compare performance over time? [N/A] Performance over time is not applicable to our work.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] The random seeds used are in the code in our supplementary.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Results are in sections 3.2 and 3.3.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations?
[Yes] We use the same benchmark as [17].
(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] We have included it in the supplementary.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [Yes] They are described in section 3.1 and the supplementary.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] See table 1 and the supplementary.
(b) Did you mention the license of the assets? [Yes] We provide details of all assets in the supplementary.
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A] We do not use any new assets.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
xaZI_Y1QWv
eBLV3i7PG1c
automl.cc/AutoML/2023/ABCD_Track
2023
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments
["Iordanis Fostiropoulos", "Laurent Itti"]
Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as ‘data-parallelism’ or ‘model-parallelism’. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, ‘Tablator’, where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR; https://github.com/fostiropoulos/ablator
["Machine Learning Systems", "Ablation Experiments", "Experiment Design"]
ABLATOR: Robust Horizontal-Scaling of Machine Learning Ablation Experiments

Iordanis Fostiropoulos, Laurent Itti
University of Southern California, Los Angeles, California

Abstract: Understanding the efficacy of a method requires ablation experiments. Current Machine Learning (ML) workflows emphasize the vertical scaling of large models with paradigms such as 'data-parallelism' or 'model-parallelism'. As a consequence, there is a lack of methods for horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensive when different tools are used for different experiment stages, such as for hyper-parameter optimization, distributed execution, or the consolidation of artifacts. We identify that errors in earlier stages of experimentation propagate to the analysis. Based on our observations, experimental results, and the current literature, we provide recommendations on best practices to prevent errors. To reduce the effort required to perform an accurate analysis and address common errors when scaling the execution of multiple experiments, we introduce ABLATOR. Our framework uses a stateful experiment design paradigm that provides experiment persistence and is robust to errors. Our actionable analysis artifacts are automatically produced by the experiment state and reduce the time to evaluate a hypothesis. We evaluate ABLATOR with ablation studies on a Transformer model, 'Tablator', where we study the effect of 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and 4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largest ablation experiment for tabular data on Transformer models to date, evaluating 2,337 models in total. Finally, we open source ABLATOR: https://github.com/fostiropoulos/ablator

1 Introduction

Machine Learning (ML) research has been criticized for an inability to explain the reasons a method provides an improvement on a specific benchmark.
It can be unclear whether a novel component is responsible for the improvement or whether the result is a statistical outlier [35].

Ablation is used to understand how hyperparameters and architectural components contribute to the performance of a method. This is in contrast to Hyper-Parameter Optimization (HPO) or Neural Architecture Search (NAS), where the objective is to search for the single best-performing configuration. As the complexity of ML models increases, so does the number of components and parameters that need to be ablated, which increases the search space of possible configurations. Therefore, efficient horizontal scaling of multiple parallel experimental trials is necessary.

There is a lack of available frameworks for horizontal scaling of ablation experiments. Currently, ML practitioners manually perform horizontal scaling for experiments, such as for hyperparameter selection, distributed execution, consolidation, and analysis of artifacts [10]. Additionally, current frameworks [31] for distributed execution do not provide native support for maintaining the state of an experiment and resuming the execution of multiple trials, referred to as experiment persistence. We find that errors in the early stages of experiments can propagate to the analysis and lead to misleading conclusions. Possible errors may be introduced by sampling bias in the hyperparameter selection strategy or by fault-intolerance of the distributed execution, survival bias.

The execution of randomized control trials is necessary to determine causal effects [23, 20]. We identify several sources of errors that can influence the results. We categorize them as Analysis, Execution, and Implementation errors.
AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Figure 1: Left: the rapid prototyping process when using ABLATOR, where only the method implementation and the configuration are required to RUN() the study and provide ANALYSIS(). ABLATOR handles the horizontal scaling of experimental trials on a cluster of nodes and is fault-tolerant, where trials can be continued on the same or a different node due to the persistence provided by ABLATOR. Right: the process without ABLATOR, where the user must use different libraries or manually perform 'HPO Selection', 'Resource Allocation', and 'Analysis'. Additional manual effort is required to integrate the libraries, and errors between the different steps propagate to an erroneous analysis. ABLATOR provides automation by removing boiler-plate code and managing errors internally.

Analysis errors can result from sampling bias in the hyperparameter selection. Nonrandom effects during experiment execution can introduce analysis errors. For example, inconclusive trials due to out-of-memory errors caused by a larger model footprint would introduce survival bias into an analysis that would then favor smaller models. Implementation errors are mistakes made by users, caused by the increased code complexity of ablating multiple method components while maintaining different code bases. We discuss the details of our analysis in Section 3.2.

To aid in error-free horizontal scaling of multiple experiments in the ML community, we propose a stateful experiment paradigm in which we unify all experiment stages under a single framework. A stateful experiment is initialized by the configuration and code implementation of a method. Our framework maintains the state of each experimental trial and provides experiment persistence, where the experiment can continue its execution agnostic to the execution environment.
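The survival-bias failure mode described above can be made concrete with a small simulation. The widths, metric formula, and out-of-memory threshold below are invented for illustration; this is not an experiment from the paper:

```python
import random

def run_trial(width, rng):
    # Hypothetical trial: the metric improves with model width, but widths
    # above an assumed memory budget crash with an out-of-memory error.
    if width > 512:
        return None  # inconclusive trial: no metric is ever recorded
    return 0.5 + 0.0005 * width + rng.gauss(0.0, 0.01)

rng = random.Random(0)
widths = [64, 128, 256, 512, 1024] * 40
trials = [(w, run_trial(w, rng)) for w in widths]

# A fault-intolerant analysis silently keeps only the surviving trials,
# biasing any conclusion about model capacity towards smaller models.
surviving = [w for w, metric in trials if metric is not None]
mean_surviving = sum(surviving) / len(surviving)  # 240.0
mean_attempted = sum(widths) / len(widths)        # 396.8
```

The mean width over conclusive trials (240.0) is far below the mean width that was actually attempted (396.8): the failed large-model trials vanish from the analysis rather than being rescheduled.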
The analysis artifacts are produced automatically from the experiment state for faster prototyping. Our paradigm is implemented in our tool ABLATOR with support for PyTorch [33] model development. We present an analysis of the sources of errors and provide recommendations that can be useful beyond our framework. We use our framework to study the effect of multiple training and model components on the performance of a Transformer model for tabular datasets, 'Tablator', where we perform a large-scale ablation study of 2,337 trials. Our contributions can be summarized as follows. First, we provide a formalization of a stateful experiment design paradigm that we use to address common errors in the execution of ML experiments. Second, we present ABLATOR, a framework that implements our paradigm and facilitates the automated execution and analysis of a model implementation given a configuration. Third, we identify sources of error in ML ablation studies and provide recommendations for mitigating them. Fourth, we perform the largest to-date ablation study of a Deep Learning model on tabular datasets and provide analysis that can be useful to the research community.

We first introduce the features of ABLATOR relevant to horizontal scaling of experiments. Next, we evaluate the main features of our tool in a case study demonstrating the horizontal scaling capabilities of ABLATOR. We present our results in three research questions, Sections 3.1 to 3.3.

2 Methods

To implement ABLATOR and address common issues in horizontal scaling of experiments, it is necessary to introduce the formalism of a 'stateful experiment design' paradigm. In this section, we introduce our paradigm, and in Section 2.4 the implementation of ABLATOR.
We identify three stages of an experiment: the design, execution, and analysis (Sections 2.1 to 2.3).

2.1 Experiment Design

During the design phase of an ML ablation study, a hypothesis is defined as an experiment on the improvement that an architectural component, such as residual connections, provides to the performance of the model. The search-space of our hypothesis can be defined as Residual = [True, False]. The methodology of our experiment is defined by the implementation of the model.

Multiple experimental trials are required to improve the statistical power of a test [20], which requires randomly sampling from the search-space. An experimental trial can be described as a stochastic process that produces a performance metric. The stochasticity can be observed when performance differs significantly under identical initial conditions, such as re-running the same experiment but obtaining different results.

Thus, to define a trial, we maintain two states that describe the system at any given point: the initial conditions (Sections 2.1.1 and 2.1.2) and the current state (Section 2.2). The initial conditions of a trial are defined by the sampled hyper-parameters and the implementation.

distributed.yaml:
    total_trials: 2000
    optim_metrics: [[val_loss, min]]
    tune:
      train_config.optimizer_config.name: ["adam", ...
      train_config.dataset: ["year", "yahoo", "helena", ...
      model_config.mask_type: ["mix", "global", "full", "random"]
      model_config.residual: [True, False]
      model_config.random_mask_alpha: [0.5, 1]

prototyping.yaml:
    train_config:
      dataset: adult
      optimizer_config:
        name: adam
    model_config:
      mask_type: random

    @configclass
    class TablatorConfig(ModelConfig):
        residual: bool = True
        d_out: Derived[ty.Optional[int]] = None
        mask_type: MaskType = MaskType("random")

    @configclass
    class RunConfig(ParallelConfig):
        experiment_dir: Stateless[Optional[str]] = None
        model_config: ModelConfig
        train_config: TrainConfig

Figure 2: ABLATOR provides a configuration system specific to ML experiments, where the configuration has to encompass multiple trials in a compact definition and be unambiguous. On the left is an illustration of the configuration for distributed execution (distributed.yaml) and method prototyping (prototyping.yaml). On the right, the configuration is type-checked by the ABLATOR library. The library provides flexible type definitions (red) that are resolved during run-time. The configuration is compact and unambiguous at initialization, supporting our stateful experiment design paradigm of Section 2.1.

2.1.1 Configuration describes the hyperparameter search-space from which the hyperparameters are sampled. Two custom Python annotations are introduced, Stateless and Derived, to define attributes to which the experiment state is agnostic, while unannotated attributes are assumed to be stateful control variables. Stateful attributes require an assignment at the initialization stage unless they are annotated as Optional.

Stateless configuration attributes can be used as a proxy for variables that can take different value assignments between trials or experiments. For example, the learning rate can be set as an independent variable and must be annotated as stateless. Additionally, there are variables that take different values between experiments and trials to which the state is agnostic; for example, a random seed or a directory path between execution environments can be annotated as stateless.

Derived attributes are undecided at the start of the experiment and do not require a value assignment. Instead, the value is determined by internal experiment processes that can depend on other experimental attributes, such as the dataset. However, given the same initial state, the attribute is expected to result in the same value and is therefore deterministic. For example, the
However, given the same initial state, theattribute is expected to result in the same value and is therefore deterministic . For example, the3input size used in a model’s architecture that depends on the dataset will be annotated as Derivedduring the experiment design phase.The annotations address common requirements of ML experiments, where a configurationmay have to describe a search-space that encompasses multiple trials, as opposed to taking on aspecific value assignment at initialization. Additionally, an ML experiment can have attributes thatare difficult to model at initialization but can be inferred during execution. For a stateful designparadigm, the configuration should be unambiguous at the initialization state, i.e. Figure 2.2.1.2 Implementation. The implementation describes the methodology of the hypothesis. Invariance ofthe implementation w.r.t. the method evaluated produces a single code artifact that encapsulates allmethods i.e. a single code base for using and not using residual connections. The implementationcomputes one or more evaluation metrics. Lastly, the implementation should have a deterministicvalue assignment to the variables we defined as Derived .Implementation invariance provides a compact representation and is robust to errors. A compactrepresentation provides ease of use that is a consequence of a shared implementation among theablating components where the differences are specified through the configuration and applied byconditional ifstatements. The advantage of this approach is that the performance variance causedby implementation differences is minimized, where even the order of matrix multiplication canhave significant effects on the method performance [46].2.2 Experiment ExecutionExperiment state can be Running orComplete as the aggregate of the state of all experimentaltrials . Each trial can be in three additional states as Pending ,Failed orPruned .Pending trials aredefined by their initial conditions alone, i.e. 
the sampled hyperparameters. A Running trial extends the definition to include a checkpoint. Complete trials extend the definition to include one or more metrics, such as the validation loss. Pruned and Failed trials are the result of irrecoverable errors during initialization or execution. A fault-tolerant strategy reschedules trials with recoverable errors as Pending and attempts to resume from the checkpoint. A long-running experiment can be interrupted (e.g. for server maintenance), while errored trials do not interfere with the results (e.g. trials that failed due to recoverable errors).

Checkpoint describes the optimization state of a trial and contains sufficient information to resume execution. ABLATOR stores the model weights, optimizer, scheduler, and training meta-data, such as the current training iteration, in a compact representation. The checkpoint mechanism in ABLATOR can be extended to support custom use cases, e.g. RL. Lastly, maintaining the state of the experiment requires keeping track of the checkpoints and results. Multiple checkpoints are stored locally on each node and can be synchronized with cloud storage. The experiment is agnostic to the execution environment: experiment persistence.

2.3 Actionable Analysis

An analysis is actionable when the automation provides sufficient artifacts to support decision making. The artifacts should facilitate a quick and informed decision on the likelihood of the hypothesis. The experiment state is used to infer the hypothesis, i.e. 'what are we ablating?', and the conclusiveness of the analysis, i.e. 'is the trial failed?'. The analyses ABLATOR provides infer the search-space, such as the control and independent variables, from the configuration and the variable type, to produce the corresponding artifacts. The artifacts produced address common problems in evaluating ML methods (Section 3.2).
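The trial life-cycle of Section 2.2 (Pending, Running, Complete, Failed, Pruned, with recoverable errors rescheduled from a checkpoint) can be sketched as a minimal state machine. The class and function names here are hypothetical and do not reflect ABLATOR's actual API:

```python
import enum

class TrialState(enum.Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETE = "complete"
    PRUNED = "pruned"
    FAILED = "failed"

class Trial:
    def __init__(self, hyperparameters):
        # Initial conditions: the sampled hyperparameters alone.
        self.hyperparameters = hyperparameters
        self.state = TrialState.PENDING
        self.checkpoint = None  # stand-in for weights/optimizer/step
        self.metrics = None

def step(trial, outcome):
    # Fault tolerance: recoverable errors are rescheduled as PENDING so
    # execution resumes from the last checkpoint; irrecoverable errors
    # mark the trial FAILED without interrupting the whole experiment.
    if outcome == "ok":
        trial.state = TrialState.COMPLETE
        trial.metrics = {"val_loss": 0.1}
    elif outcome == "recoverable":
        trial.state = TrialState.PENDING
    else:
        trial.state = TrialState.FAILED

t = Trial({"lr": 1e-3})
t.state = TrialState.RUNNING
t.checkpoint = {"step": 100}   # e.g. synchronized to cloud storage
step(t, "recoverable")         # node preemption: back to PENDING
# the trial is PENDING again, with its checkpoint intact for the resume
```

The key property is that a recoverable failure only moves the trial back to PENDING while its checkpoint survives, which is what makes the experiment persistent across node restarts.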
For each attribute, the goal is to encapsulate the best, average, variance, and distribution of the performance metric under a single figure; i.e. Figures 4 and 5.

2.4 ABLATOR

ABLATOR is designed in Python with support for PyTorch models, while the distributed execution system uses Ray Core [31]; Figure 1. We describe the features of ABLATOR important to a stateful experiment paradigm. ABLATOR can be extended or customized to the specific use-case without loss of automation, where an object-oriented design provides access to function overwriting. The features of ABLATOR provide ease of use, where only defining an experiment through an implementation and a configuration is required. Automation is supported by providing an abstraction layer over distributed execution with fault tolerance, artifact consolidation, and analysis. Our framework is agnostic to the execution environment and can run on a laptop or a cluster of nodes.

Configuration uses a hierarchical dictionary-like format that is easy to understand and can be converted to and from yaml files. ABLATOR uses a strict type-checking system with custom annotations (Section 2.1.1). A unique signature identifier ("ID") is generated for each experiment that corresponds to the values of the stateful configuration attributes, while for a trial, the identifier is based on the unique value assignment of all configurable properties. Thus, the configuration system allows for a hierarchical representation of trials under a single experiment and facilitates experiment persistence, where multiple experiments are stored in the same directory.

Implementation A Trainer class manages the physical resources of the experiment. There are two options according to the use case: ProtoTrainer for prototyping in a local environment, and ParallelTrainer for horizontal scaling of a single experiment. ParallelTrainer is unique to ABLATOR, where multiple trials are managed and executed in parallel.
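One plausible way to derive such a signature identifier is to hash only the stateful configuration values, so that stateless attributes (directories, random seeds) do not change the experiment's identity. The sketch below is a guess at the idea, not ABLATOR's actual implementation:

```python
import hashlib
import json

def experiment_id(config, stateless_keys):
    # Hash only the *stateful* attributes: stateless values such as an
    # output directory or random seed must not change the experiment ID.
    stateful = {k: v for k, v in config.items() if k not in stateless_keys}
    blob = json.dumps(stateful, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:8]

stateless = {"experiment_dir", "seed"}
a = experiment_id({"lr": 0.1, "experiment_dir": "/tmp/a", "seed": 1}, stateless)
b = experiment_id({"lr": 0.1, "experiment_dir": "/mnt/b", "seed": 2}, stateless)
c = experiment_id({"lr": 0.2, "experiment_dir": "/tmp/a", "seed": 1}, stateless)
# a == b: stateless differences are ignored; a != c: stateful difference
```

With such an ID, trials belonging to the same experiment can be grouped under one directory regardless of which machine or path they were produced on.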
Prototyping to experiment deployment requires a single change, ProtoTrainer ⇒ ParallelTrainer.

Artifact Persistence For every resource node, the trials are executed in parallel, and a failure in a single trial does not interrupt the experiment. We use the master node to maintain the experiment state (Section 2.2) and synchronize the artifacts of all nodes with a central database. Cloud compute nodes are often ephemeral, and restarting the experiment requires only that the files be synchronized among the centralized storage and all nodes. Furthermore, the files stored in the central storage are sufficient to perform an analysis or recover from errors.

Analysis Artifacts are specific to numerical attributes and categorical attributes. The attribute type is informed by the configuration. Figures are artifacts that summarize the mean, best, and distribution of a performance metric. For numerical attributes, we use scatter-plots with optional interpolation curves, while for categorical attributes we use violin-plots. The analysis can be extended to support custom use cases, such as additional figures or tables, while still being automatically generated from the experiment state; examples are in Section 3.3 and our supplementary.

3 Experiments and Results

We first present how ABLATOR can be used for horizontal scaling with an ablation study on 'Tablator', a Transformer model we designed for this study; Section 3.1. In Section 3.2, we categorize common errors during horizontal scaling of ablation experiments and provide our recommendations. In Section 3.3, we provide the results of an ablation experiment on tabular dataset benchmarks. For reasons of brevity, we discuss only the results most relevant to ABLATOR. We attach the code that was used for our experiments and analysis, and additional experiments, in the supplementary.

3.1 RQ-1: How can ABLATOR improve the horizontal scaling of thousands of experimental trials?

ABLATOR requires the configuration and implementation.
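The one-line ProtoTrainer ⇒ ParallelTrainer swap can be illustrated with mock classes that mirror the described workflow; these are not the real ABLATOR classes, and the launch() interface is an assumption made for this sketch:

```python
class ProtoTrainer:
    """Mock: runs a single trial locally, for prototyping."""
    def __init__(self, model_fn, config):
        self.model_fn = model_fn
        self.config = config

    def launch(self):
        return [self.model_fn(self.config)]

class ParallelTrainer(ProtoTrainer):
    """Mock with the same interface: one trial per sampled configuration.
    In the real system trials run in parallel on a Ray cluster; here they
    run sequentially, purely for illustration."""
    def __init__(self, model_fn, config, search_space):
        super().__init__(model_fn, config)
        self.search_space = search_space

    def launch(self):
        return [self.model_fn({**self.config, **trial})
                for trial in self.search_space]

def train(cfg):
    # Stand-in for a model implementation that returns its metrics.
    return {"lr": cfg["lr"], "val_loss": cfg["lr"] * 0.5}

# Prototyping: ProtoTrainer(train, {"lr": 0.1}).launch()
# Deployment: a single change of trainer class fans out the trials.
results = ParallelTrainer(train, {"lr": 0.1},
                          [{"lr": 0.01}, {"lr": 0.1}]).launch()
```

Because both trainers expose the same launch() interface, the method implementation itself never changes between prototyping and deployment, which is the invariance the paper argues for.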
We extend the implementation of FT-Transformer (FT-T) [17] (https://github.com/Yura52/tabular-dl-revisiting-models) with minimal changes to the original code. We implement a model we call 'Tablator' and evaluate all the design components of FT-T as well as the effect of Residual Connections [21] and Attention Masks inspired by BigBird [45]. We evaluate 'Full', 'Mixed', 'Global', and 'Random' attention mechanisms and explain their implementation in the supplementary. We perform an ablation on 14 model hyperparameters and components in total, and evaluate the effect that model capacity, dropout hyperparameters, prenormalization, weight initialization, and the activation function have on model performance. Additionally, we evaluate 7 dataset preprocessing techniques and training configurations, such as feature encoding methods, missing-value imputation, feature normalization, training time, and optimization.

The differences between 'Tablator' and FT-T are an additional module for attention masks that requires 9 additional lines of code, as well as 2 inserted lines of code for residual connections. The majority of the development effort was directed towards making the original dataset performant and converting it to a PyTorch Dataset as opposed to a Python dataclass. We define the tunable configurable hyperparameters as shown in Figure 2.
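The attention-mask variants named above could be sketched as boolean matrix generators (a toy illustration loosely inspired by BigBird-style sparse attention, not the paper's implementation; here True means attention is allowed):

```python
import random

def attention_mask(n, kind, n_random=2, seed=0):
    """Toy masks for a sequence of n tokens, in the spirit of the
    'Full', 'Global', and 'Random' variants named in the text."""
    rng = random.Random(seed)
    if kind == "full":
        # dense attention: every token attends to every token
        return [[True] * n for _ in range(n)]
    if kind == "global":
        # token 0 acts as a global token: it attends everywhere, is
        # attended to by everyone, and self-attention is always kept
        return [[i == 0 or j == 0 or i == j for j in range(n)] for i in range(n)]
    if kind == "random":
        # self-attention plus a few random connections per token
        mask = [[i == j for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in rng.sample(range(n), n_random):
                mask[i][j] = True
        return mask
    raise ValueError(kind)

full = attention_mask(4, "full")
assert all(all(row) for row in full)
rnd = attention_mask(6, "random")
assert all(rnd[i][i] for i in range(6))  # diagonal always kept
```

In a Transformer, such a matrix would be turned into additive `-inf` logits before the softmax; the point here is only how few lines such a component can take, consistent with the small diff reported above.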
We first verified our implementation with a ProtoTrainer in this section, and then we scaled our experiment with a single code change, using a ParallelTrainer, to thousands of trials for our results in Section 3.3. For this experiment, it took significantly more time to write the current section of this paper than it took to write the code and start the execution of the experiments.

3.2 RQ-2: What are common sources of errors during horizontal scaling of experiments?

We identify 3 categories of errors, Analysis†, Execution‡, and Implementation∗ errors, that are based on empirical observations, and use previous analysis [10,8,9,27,36,1,46,12] to support our conclusions. In this section, we provide examples of each and attach additional analysis in our supplementary.

Figure 3: We evaluate how Budget Allocation‡ can influence the analysis of an ablation study. We vary the number of trials we use for analysis ('N trials'). We compare estimating the performance of a method on a dataset using the mean (left) (i.e., ANOVA) or the best (right) trial (i.e., proof-by-existence). Evaluating the performance of a component by its mean performance would require fewer trials for an easier dataset ('Covtype') when compared to using the best trial. For a more challenging dataset ('Aloi'), evaluating by the best trial would be more efficient, as the performance converges at around 20 trials (right figure) compared to >50 for the mean (left figure). We conclude that the ablation budget should be taken into account and be relevant to the type of analysis.

Sampling Strategy† can be incompatible with the method used to evaluate the performance of a component and lead to misleading analysis [41]. For example, performing HPO and comparing the mean performance of the sampled trials can bias the result towards a single component variant. We perform two identical experiments using Tablator with an identical budget for the CovType ('CO') dataset [7]. When randomly sampling between 5 optimizers, AdaB [47], Adam [24], AdamW [29], RAdam [28], and SGD [39], every optimization algorithm was sampled with an even probability P(O) ≈ 0.2.
On the contrary, when performing HPO with the Tree-structured Parzen Estimator (TPE) [3], SGD was oversampled with P(SGD) = 0.76, as it was found to perform relatively better compared to other methods. Other optimization methods were undersampled by TPE, and their estimated performance is lower when compared to the empirical mean performance of the same method calculated via Random Sampling. When TPE was used, all optimizers appeared to underperform on average by 4.6% and 3.8% when evaluating the best and mean trial performance, respectively. We conclude that statistical tests can be influenced by the bias of the HPO method used to sample configurations, and the performance of undersampled components might not be fully explored.

Table 1: We evaluate the difference between the best-performing trials as reported by FT-Transformer ('FT-T') [17] and as found by our ablation experiments in Section 2.1. FT-T is in the subspace of configurations of Tablator, where a greedy HPO strategy is used as opposed to random sampling for Tablator. As such, we expect Tablator to perform similarly but not better. We use the benchmark as a way to evaluate Implementation Errors∗ from Section 3.2. We conclude that our implementation contains no errors, as the relative difference (ΔImp.∗) is within the expected margin of error between HPO and random sampling.

Dataset    CA↓     AD↑    HE↑    JA↑    HI↑    AL↑    EP↑    YE↓    CO↑    YA↓     MI↓
FT-T       0.459   0.859  0.391  0.732  0.729  0.960  0.898  8.855  0.970  0.756   0.746
Tablator   0.535   0.856  0.368  0.718  0.723  0.921  0.896  8.778  0.930  0.780   0.749
ΔImp.∗    -0.076   0.003  0.023  0.014  0.006  0.039  0.002  0.077  0.040 -0.024  -0.003

Survival Bias† can be caused by nonrandom execution errors. We identify the trials for which there were memory errors. We perform feature importance analysis and use a surrogate random forest model [34] to predict whether a trial will result in a memory error. We find that the configuration attributes related to the dataset and the hidden dimension were the most important.
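The paper fits a scikit-learn random forest as the surrogate; as a self-contained stand-in, the same idea (attributing out-of-memory errors to configuration attributes) can be sketched with a toy threshold model, synthetic trials, and permutation importance:

```python
import random

rng = random.Random(0)

# Synthetic trials: out-of-memory errors correlate with hidden_dim, not lr.
trials = [{"hidden_dim": rng.randint(16, 1024), "lr": rng.random()} for _ in range(400)]
labels = [t["hidden_dim"] > 700 for t in trials]  # True = out-of-memory

def accuracy():
    # stand-in "model": a fixed threshold on hidden_dim predicts OOM
    preds = [t["hidden_dim"] > 700 for t in trials]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

base = accuracy()

def permutation_importance(feature):
    """Shuffle one feature's column and measure the drop in accuracy."""
    saved = [t[feature] for t in trials]
    shuffled = saved[:]
    rng.shuffle(shuffled)
    for t, v in zip(trials, shuffled):
        t[feature] = v
    drop = base - accuracy()
    for t, v in zip(trials, saved):  # restore the column
        t[feature] = v
    return drop

# The memory-related attribute matters; the unrelated one does not.
assert permutation_importance("hidden_dim") > permutation_importance("lr")
```

A surrogate trained on real trial configurations would replace the threshold model, but the attribution logic is the same.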
A larger dataset has more features, which leads to a model with a larger hidden dimension. The attributes related to the hidden dimension scored 23% higher than the average feature importance. We conclude that smaller models and datasets will have a Survival Bias from the fewer out-of-memory execution errors, and that such bias could be mitigated by better resource allocation. For example, one can group experiments by their memory utilization so as to avoid out-of-memory errors from the largest trial.

Figure 4: Evaluation of the effect of a larger model for a regression dataset, where (RMSE)↓ is normalized for the relative difficulty of each dataset. A larger model performs better but with higher variance, where the uncertainty on the estimated performance increases. A larger model might be a more risky choice when deploying a model that requires iterative re-training.

Resource Utilization statistics‡ We observe the resource utilization statistics: the mean usage of a trial is 3,075±3,578 (MiB), while the maximum is 32,303 (MiB). The high variance in memory utilization is a consequence of a search space that correlates with memory utilization. Allocating resources based on the largest trial might be infeasible; using a heuristic for resource utilization might be necessary.

Budget Allocation‡ We vary the number of experimental trials for 10 repeated observations and report the best and mean performance in Figure 3. An increased budget reduces the variance of the mean performance. We report less variance in the performance of the best trial for repeated observations. We conclude that, for 'Tablator', fewer trials are required to obtain an estimate of the top performance, while the mean performance would require more trials.

Implementation Errors∗ Our observations on implementation errors extend previous analysis [46,27,36,12] on the impact of ML tooling, where the sources of errors are poor development practices and variance introduced by tooling.
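Seed-induced variance of the kind these works characterize can be illustrated with a toy training stand-in (synthetic numbers, not measurements from the paper):

```python
import random

def train_toy_model(seed):
    """Stand-in for a training run whose final metric depends on the seed
    (weight init, data order, nondeterministic kernels, ...)."""
    rng = random.Random(seed)
    # pretend final accuracy fluctuates around 0.8 purely due to randomness
    return 0.8 + rng.uniform(-0.05, 0.05)

accuracies = [train_toy_model(s) for s in range(20)]
spread = max(accuracies) - min(accuracies)
# The seed alone moves the metric; differences within this band are noise,
# so comparisons between components must account for it.
assert 0 < spread <= 0.1
```

A comparison between two components whose measured gap falls inside such a seed-induced band is exactly the kind of result that repeated trials and variance reporting are meant to guard against.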
Packaging has the benefit of incremental development and modular design, where in the example of 'Tablator' two methods ([45] and [17]) can be combined. Additionally, as the method complexity increases, version control that includes the configuration, and analysis that corresponds to the implementation, can prevent misinterpretation of the results.

3.3 RQ-3: Can ABLATOR be used to perform a large-scale ablation study on tabular datasets?

We use 'Tablator', presented in Section 3.1, to evaluate possible improvements in data processing, the Transformer model architecture, and the effect of training hyperparameters on 2,337 trials, where the current largest ablation on tabular datasets is 2,000 trials [48]. Our results are summarized in Figures 4 and 5. In Table 1 we report the Accuracy, where higher is better ↑, and the root-mean-square error ('RMSE'), where lower is better ↓, on 11 datasets [32,25,18,18,2,16,17,4,7,11,38], identical to the benchmark of FT-T [17]. We find that Tablator performs similarly on all datasets. The goal of the benchmark comparison is to verify our implementation, while the goal of our study is to evaluate general methods that work best among datasets, and not a benchmark improvement. Similarly to FT-T [17], we conclude that the simplest methods work best in most general cases; i.e., SGD [39] with momentum has the best mean performance on 9 of 11 datasets.

Figure 5: Example of automatically generated analysis artifacts from ABLATOR. On the left are the artifacts for 'CO' [7] and on the right for 'AL' [16]. We compare the effect of an optimizer on the performance on a dataset. In agreement with [44], there is no single model that generalizes across all datasets; for example, Adam [24] under-performs for 'AL' but not for 'CO'. We conclude that separate ablation studies will be required for different datasets.
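Average ranks of the kind reported in this section can be computed by ranking methods per dataset and averaging (the scores below are hypothetical toy numbers, not the paper's results):

```python
def mean_rank(scores):
    """scores: {dataset: {method: metric}} with higher = better.
    Returns each method's rank (1 = best) averaged over datasets."""
    methods = sorted(next(iter(scores.values())))
    totals = {m: 0.0 for m in methods}
    for per_dataset in scores.values():
        ordered = sorted(methods, key=lambda m: per_dataset[m], reverse=True)
        for rank, m in enumerate(ordered, start=1):
            totals[m] += rank
    return {m: totals[m] / len(scores) for m in methods}

# hypothetical accuracies for three optimizers on three datasets
scores = {
    "D1": {"sgd": 0.90, "adam": 0.88, "radam": 0.85},
    "D2": {"sgd": 0.70, "adam": 0.72, "radam": 0.69},
    "D3": {"sgd": 0.95, "adam": 0.91, "radam": 0.93},
}
ranks = mean_rank(scores)
assert ranks["sgd"] < ranks["adam"]  # sgd ranks best on average here
```

The same aggregation applied to best-trial metrics instead of mean metrics yields the second kind of ranking discussed below (e.g., RAdam ranking differently by best trial than by mean).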
For more complex methods, there is a large variance in the performance of the method between datasets. For example, we find that RAdam [28] ranks on average 2.71 for classification datasets but 3.75 for regression datasets when evaluated by the mean performance. Additionally, more complex methods may result in the best-performing trial but perform worse on average, where RAdam ranks on average 2.25 when evaluated on the best-performing trial for regression datasets (compared to 3.75). Our results indicate that using a complex method may require a large tuning budget to return good results. Additionally, we conclude that larger models only perform moderately better; Figure 4.

The high performance variance between different components on different datasets leads us to conclude that evaluations should be done with multiple datasets. Additionally, we find that tuning would be required that is specific to the dataset and the training configuration. Simple design choices, such as SGD and moderate model capacity, can provide a good starting point, while more complex training configurations can provide trade-offs on performance and uncertainty that can be specific to the use case.

From the median and mean performance observed in our results, we did not find any of the preprocessing methods to have a consistent, significant effect on the model performance. ABLATOR can help provide actionable results specific to the dataset. We conclude that several ablation experiments are required to evaluate a method, and ABLATOR is the only tool currently available to facilitate rapid evaluation.

4 Discussion

In our work we present ABLATOR, an AutoML framework for ablation experiments. Beyond our framework, there are several issues w.r.t. automated decision making, as there is no universal statistical test or threshold to accept or reject a hypothesis. Analysis requires domain expertise relevant to the evaluation setting.
Specific to ML research is the lack of methods for evaluating a hypothesis where the metric can be both non-normally distributed and heteroskedastic; i.e., Figure 5.

Broader Impact Statement Performing large-scale ablation experiments may require a large number of computational resources that can negatively impact the environment through CO2 emissions. However, the automation provided by ABLATOR can result in a more effective use of computational resources and reduce CO2 emissions. ABLATOR can help improve research practices without a negative impact on society when used in the context in which it is presented.

5 Related Works

We identify four categories of work that are most similar to ours: work that focuses on errors introduced by tools and incorrect analysis, work on horizontal scaling of experiments, work that aids in ablation studies, and tools for automated HPO.

Previous work [10,8,9,27,36,1,46,12] identifies the source of erroneous analysis as poor experiment design practices resulting from improper use of statistical evaluation methods, HPO budget, HPO strategies, and tooling, and provides recommendations. We extend their work and investigate errors during horizontal scaling of experiments that lead to erroneous analysis. We identify errors from the sampling strategy, nonrandom execution errors, and implementation errors. We provide general recommendations in Section 3.2 and address the errors with ABLATOR.

Several tools have been proposed [13,15,22,43,26] that support distributed experiment execution. However, they require manual effort in integrating with other libraries for resource allocation, scheduling of experiments, resuming faulty trials, result aggregation, configuration sampling, and analysis. In contrast, ABLATOR combines all of the above in an automated fashion, where only the implementation and configuration of the method are used to produce the analysis artifacts.

Ablation frameworks introduce methods and tools specific to constructing ablation analysis artifacts.
Such methods can have limited use cases [19,5,37] or lack automation [42]. In contrast, ABLATOR provides analysis artifacts that give a holistic view of a method's performance and can be extended to support automation and the specific use cases addressed by the works above.

AutoML methods [14,48,6] are designed for HPO and can be extended to ablation experiments that provide support for automated analysis. Unlike ABLATOR, such tools are designed for simple use cases, such as statistical models, and require additional effort to scale the experiments horizontally. Such tools, and similar ones, can be used as the implementation provided to ABLATOR and as such are orthogonal to our work. AutoAblation [40] extends Maggy [30] to Deep Learning models. However, allocating and managing GPU resources for each trial requires manual effort, and AutoAblation does not provide experiment persistence and as such is not fault-tolerant. Additionally, the declarative design paradigm has limited use cases, as opposed to the object-oriented design of ABLATOR. As such, ABLATOR improves automation by managing GPU resources, storing experimental artifacts, and restarting erroneous trials, removing boiler-plate code: only the method implementation with the configuration is required to provide automated analysis.

6 Conclusion

In this work, we identify several sources of error common in the horizontal scaling of multiple experimental trials. We provide general recommendations and address the errors with a stateful experiment design paradigm. ABLATOR implements the paradigm to automate the scaling of ablation experiments across multiple resources and produce analysis artifacts in an automated fashion, for rapid iterative prototyping. We evaluate ABLATOR with a Transformer model for tabular datasets, 'Tablator', where we study the effect of several architectural components and hyperparameters in the largest ablation study for tabular datasets to date.
ABLATOR is an effective tool to conduct large-scale ablation studies with ease and leads to actionable insights that are particular to the experimental setting.

References

[1] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information Processing Systems, 34:29304–29320, 2021.
[2] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, 5(1):4308, 2014.
[3] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in Neural Information Processing Systems, 24, 2011.
[4] Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. 2011.
[5] André Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett, and Holger Hoos. Efficient parameter importance analysis via ablation with surrogates. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[6] André Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. CAVE: Configuration assessment, visualization and evaluation. In Roberto Battiti, Mauro Brunato, Ilias Kotsireas, and Panos M. Pardalos, editors, Learning and Intelligent Optimization, pages 115–130, Cham, 2019. Springer International Publishing.
[7] Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 1999.
[8] Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk, Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti, et al. Accounting for variance in machine learning benchmarks.
Proceedings of Machine Learning and Systems, 3:747–769, 2021.
[9] Xavier Bouthillier, César Laurent, and Pascal Vincent. Unreproducible research is reproducible. In International Conference on Machine Learning, pages 725–734. PMLR, 2019.
[10] Xavier Bouthillier and Gaël Varoquaux. Survey of machine-learning experimental methods at NeurIPS 2019 and ICLR 2020. PhD thesis, Inria Saclay Ile de France, 2020.
[11] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the Learning to Rank Challenge, pages 1–24. PMLR, 2011.
[12] Katharina Eggensperger, Marius Lindauer, and Frank Hutter. Pitfalls and best practices in algorithm configuration. Journal of Artificial Intelligence Research, 64:861–893, 2019.
[13] William Falcon et al. PyTorch Lightning. GitHub repository, 3, 2019.
[14] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-sklearn 2.0: The next generation. CoRR, abs/2007.04074, 2020.
[15] V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Tejani. High-level library to help with training neural networks in PyTorch. https://github.com/pytorch/ignite, 2020.
[16] Jan-Mark Geusebroek, Gertjan J Burghouts, and Arnold WM Smeulders. The Amsterdam library of object images. International Journal of Computer Vision, 61:103–112, 2005.
[17] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deep learning models for tabular data. CoRR, abs/2106.11959, 2021.
[18] Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, et al. Analysis of the AutoML challenge series. Automated Machine Learning, 177, 2019.
[19] Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang, Brian Barr, and C Bayan Bruss. BASED-XAI: Breaking ablation studies down for explainable artificial intelligence.
arXiv preprint arXiv:2207.05566, 2022.
[20] Eduardo Hariton and Joseph J Locascio. Randomised controlled trials—the gold standard for effectiveness research. BJOG: An International Journal of Obstetrics and Gynaecology, 125(13):1716, 2018.
[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[22] Jeremy Howard and Sylvain Gugger. fastai: A layered API for deep learning. CoRR, abs/2002.04688, 2020.
[23] Kosuke Imai, Dustin Tingley, and Teppei Yamamoto. Experimental designs for identifying causal mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society, 176(1):5–51, 2012.
[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[25] Ron Kohavi et al. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In KDD, volume 96, pages 202–207, 1996.
[26] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
[27] Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. On the reproducibility and replicability of deep learning in software engineering. ACM Transactions on Software Engineering and Methodology (TOSEM), 31(1):1–46, 2021.
[28] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.
[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[30] Moritz Meister, Sina Sheikholeslami, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. Maggy: Scalable asynchronous parallel hyperparameter search.
In Proceedings of the 1st Workshop on Distributed Machine Learning, pages 28–33, 2020.
[31] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. CoRR, abs/1712.05889, 2017.
[32] R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters, 33(3):291–297, 1997.
[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. Curran Associates Inc., Red Hook, NY, USA, 2019.
[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[35] David Picard. Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision, 2021.
[36] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machine learning research (a report from the NeurIPS 2019 reproducibility program). The Journal of Machine Learning Research, 22(1):7459–7478, 2021.
[37] Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hyperparameters of machine learning algorithms. The Journal of Machine Learning Research, 20(1):1934–1965, 2019.
[38] Tao Qin and Tie-Yan Liu. Introducing LETOR 4.0 datasets.
arXiv preprint arXiv:1306.2597, 2013.
[39] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[40] Sina Sheikholeslami, Moritz Meister, Tianze Wang, Amir H Payberah, Vladimir Vlassov, and Jim Dowling. AutoAblation: Automated parallel ablation studies for deep learning. In Proceedings of the 1st Workshop on Machine Learning and Systems, pages 55–61, 2021.
[41] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Hugo Jair Escalante and Katja Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 3–26. PMLR, 2021.
[42] Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2367–2376, 2018.
[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, 2020. Association for Computational Linguistics.
[44] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
[45] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al.
Big Bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283–17297, 2020.
[46] Donglin Zhuang, Xingyao Zhang, Shuaiwen Song, and Sara Hooker. Randomness in neural network training: Characterizing the impact of tooling. Proceedings of Machine Learning and Systems, 4:316–336, 2022.
[47] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar C Tatikonda, Nicha Dvornek, Xenophon Papademetris, and James Duncan. AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients. Advances in Neural Information Processing Systems, 33:18795–18806, 2020.
[48] Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-PyTorch Tabular: Multi-fidelity metalearning for efficient and robust AutoDL. arXiv preprint arXiv:2006.13799, 2020.

7 Submission Checklist

1. For all authors. . .
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] Our results can be found in Sections 3.1 to 3.3.
(b) Did you describe the limitations of your work? [Yes] See Section 4.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 4.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] They are applied throughout the paper.
2. If you are including theoretical results. . .
(a) Did you state the full set of assumptions of all theoretical results? [N/A] There are no theoretical results in our work.
(b) Did you include complete proofs of all theoretical results? [N/A] There are no theoretical results in our work.
3. If you ran experiments. . .
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)?
[Yes] We have included the code that was used to run all the experiments and produce the tables and figures as a zip file.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] We include the raw results that were used to obtain our analysis.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] We have included them in the supplementary.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] We have followed standard development practices.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyper-parameter settings, and how they were chosen)? [Yes] We have included them in the supplementary.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We have included them in the supplementary.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 3.3.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We use an identical evaluation protocol when comparing between methods for all our experiments in Sections 3.1 to 3.3.
(i) Did you compare performance over time? [N/A] Performance over time is not applicable for our work.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] The random seeds used are in the code in our supplementary.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Results are in Sections 3.2 and 3.3.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations?
[Yes] We use the same benchmark as [17].
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] We have included it in the supplementary.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? [Yes] They are described in Section 3.1 and the supplementary.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .
(a) If your work uses existing assets, did you cite the creators? [Yes] Table 1 and supplementary.
(b) Did you mention the license of the assets? [Yes] We provide details of all assets in the supplementary.
(c) Did you include any new assets either in the supplemental material or as a url? [N/A] We do not use any new assets.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
i1csF0cazL
uN70Dum6pC2
automl.cc/AutoML/2023/ABCD_Track
2023
MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts
["Diederick Vermetten", "Furong Ye", "Thomas B\u00e4ck", "Carola Doerr"]
Extending a recent suggestion to generate new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform, we propose in this work a further generalization that allows multiple affine combinations of the original instances and arbitrarily chosen locations of the global optima. We demonstrate that the MA-BBOB generator can help fill the instance space, while overall patterns in algorithm performance are preserved. By combining the landscape features of the problems with the performance data, we pose the question of whether these features are as useful for algorithm selection as previous studies have implied. MA-BBOB is built on the publicly available IOHprofiler platform, which facilitates standardized experimentation routines, provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.
["Benchmarking", "algorithm selection", "black-box optimization", "numerical optimization", "function generation", "instance space", "exploratory landscape analysis"]
MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts

Diederick Vermetten1, Furong Ye1, Thomas Bäck1, Carola Doerr2
1 Leiden Institute for Advanced Computer Science (LIACS), Leiden University, The Netherlands
2 Sorbonne Université, CNRS, LIP6, Paris, France

Abstract Extending a recent suggestion to generate new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform, we propose in this work a further generalization that allows multiple affine combinations of the original instances and arbitrarily chosen locations of the global optima. We demonstrate that the MA-BBOB generator can help fill the instance space, while overall patterns in algorithm performance are preserved. By combining the landscape features of the problems with the performance data, we pose the question of whether these features are as useful for algorithm selection as previous studies have implied. MA-BBOB is built on the publicly available IOHprofiler platform, which facilitates standardized experimentation routines, provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.

1 Introduction

Despite a long tradition of developing automated Machine Learning (AutoML) approaches for numerical black-box optimization contexts [3,12,28], empirical evaluations are heavily centered around very few benchmark collections. One of the most popular collections is the BBOB suite [10] of the COmparing COntinuous Optimizers (COCO) platform [9]. The BBOB suite was originally designed to help researchers analyze the behavior of numerical black-box algorithms in different optimization contexts.
Over time, however, BBOB has been used for many other purposes, including evaluating AutoML methods, even though the problems were never designed to be suitable for this task.
With the increasing popularity of the BBOB benchmarks, the wide availability of shared performance data has enabled the application of, e.g., algorithm selection methods [12]. To build such algorithm selectors, a representation of the problem space is required, based on which the performance of different algorithms can be predicted. In the case of BBOB, the most commonly used representation makes use of Exploratory Landscape Analysis (ELA), which has been shown to accurately distinguish between BBOB problems [20, 27].
A key problem of algorithm selection based on BBOB problems lies in the ability to test how well the results generalize. One approach is the leave-one-function-out method [23], where the selector is trained on 23 functions and tested on the remaining one. This generally leads to poor performance, as each problem has been specifically designed to have different global function properties. As such, another common method is to leave out a set of problem instances for testing. This way, the selector is trained on all types of problems. However, this has a high potential to overfit the particular biases of the BBOB problems, an often overlooked risk.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. © 2023 the authors, released under CC BY 4.0

To remedy these potential issues, the ability to construct new functions that fill the spaces between existing BBOB functions could be critical.
If the instance space can be filled with new problems, these could be used not only to test the generalizability of algorithm selection methods, but also more generally to gain insights into, e.g., the relation between the ELA representation of a problem and the behavior of optimization algorithms.
Filling the instance space is a topic of rising interest within the optimization community [1, 19, 22, 34]. While some work aims to create problem instances that reflect the properties of real-world applications or share characteristics with existing problems, other work aims to generate diverse instances. For example, symbolic regression and simulation of Gaussian processes have been applied to generate benchmarks reflecting real-world problem behaviours in [35] and [17, 29]. On the other hand, research on generating diverse instances of combinatorial optimization problems has been conducted in [4, 5, 16, 19]. Regarding black-box numerical optimization, approaches based on Genetic Programming (GP) have succeeded in generating novel problem instances with controllable characteristics defined by their ELA features [21]; there, the authors used ELA features of BBOB instances as a baseline to regenerate similar instances and to design diverse ones. However, to obtain problems with desired characteristics, the GP needs to be executed for each dimension. A recent paper proposed a different perspective on generating new problem instances for numerical optimization: Dietrich and Mersmann [8] propose to create new problems through weighted combinations of BBOB problems. For these affine combinations of existing problems, the ELA features appear to transition smoothly between the two component functions. Moreover, affine combinations of two BBOB problems were applied to analyze the behavior of optimization algorithms in [32].
The results of that paper demonstrate that algorithm performance varies with the weights of the two combined problems.
In this paper, we extend the modified version of the affine BBOB combinations [32] by generalizing to combinations of any number of BBOB functions. In doing so, we address the concerns regarding the scaling of the component functions and the impact of the location of the global optimum. We also propose a modified mechanism to sample weights, to avoid potential biases resulting from including too many problems.
From the proposed many-affine problem generation method, we sample 1 000 instances, for which we perform both an ELA-based analysis and an analysis of the performance of a set of algorithms. By combining these results in a simple algorithm selection model, we raise the question of whether or not the ELA features are sufficiently representative to create a generalizable algorithm selection model.
In summary, our key contributions and findings are:
1. We introduce MA-BBOB, a generator of arbitrary affine combinations of the 24 BBOB functions. We explain the rationales behind the various design choices, which include the location of the optimum, the scaling used for interpolating the different functions, and the way functions are sampled from this space.
The resulting generator is built on the IOHprofiler platform, which enables benchmarking setups equivalent to those of the original BBOB problems.
2. We analyze 1 000 randomly sampled instances in 2d and in 5d via Exploratory Landscape Analysis (ELA [20]) and show that the combined MA-BBOB functions cover the space between the original 'pure' BBOB functions quite well, with the exception of some problems, like the linear slope and the ellipsoid problem, which are essentially only available as 'pure' BBOB functions but disappear in the MA-BBOB instances with non-trivial weights.
3. We compare the performance of five black-box optimization algorithms on the original BBOB and the 1 000 randomly sampled MA-BBOB instances and show that the rank distribution changes slightly in favour of the CMA-ES algorithms and to the disadvantage of RCobyla.
4. Finally, we also perform per-instance algorithm performance prediction studies on MA-BBOB. The results confirm that the regression accuracy is better when the training set includes generalized BBOB functions. However, we also observe a considerable performance gap between ELA-based regression models and those trained with full knowledge of the weights that are used to construct the test instances. These results indicate that the current set of ELA features fails to capture some instance properties that are crucial for algorithm performance, a shortcoming that we expect to motivate future research on the design of features for numerical black-box optimization.

2 Background
The BBOB Problem Suite. The BBOB collection [10] is one of the main components of the COCO framework [9]. It is heavily used in the black-box optimization community for evaluating derivative-free numerical optimization techniques.
On the original BBOB suite of 24 single-objective, noiseless optimization problems [10], hundreds of different optimization algorithms have been tested [2].
One key reason for the popularity of this suite is the ability to create independent instances of the same problem, generated by applying transformations in the domain and the objective space. These transformations include rotation, scaling of the objective value, and moving the location of the global optimum. They allow researchers to evaluate possible bias in their algorithms, and are hence an important component of algorithm benchmarking.
The availability of many instances is also a key enabler for the evaluation of AutoML approaches in black-box optimization contexts. Since not all instances are easily accessible via the original COCO implementation, we have made direct access to the instances available in our IOHprofiler benchmarking environment [7, 33].
Affine Function Combinations. While the availability of numerous instances for each BBOB function facilitates AutoML studies, it has been observed that the generalization ability of models trained on BBOB and tested on independent problems is disappointing [13, 15]. This motivated the design of new problems to extend the existing BBOB suite. One such approach, proposed in [8], suggests considering affine combinations of two different problem instances. The resulting problems were analyzed with respect to their fitness landscapes, as seen via exploratory landscape analysis (ELA [20]), and have been shown to smoothly connect their component functions in a reduced-dimensionality ELA space. This seems to imply that we can use these problems to connect any pair of existing problems, which would significantly enrich the instance space.
In our follow-up study [32], we recently proposed a modified version of these affine function combinations; see Sec. 3.1 for details.
We used these functions to compare the performance of five selected black-box optimization algorithms and showed that the behavior differences are not as smooth as the differences in ELA space. In several cases, combinations of two functions are best solved by a different algorithm than the one which solved the component problems.

3 The MA-BBOB Benchmark Suite
3.1 Scaling of Function Values
When combining multiple functions to create a new benchmark problem, one key factor which impacts the landscape is the scaling of the combined functions. Since we are interested in taking affine combinations of existing functions, a difference in scale might lead one function to dominate all others, leading to limited coverage of the feature space.
The original affine BBOB functions proposed in [8] make use of a tuning procedure for finding usable weights. While this allows for selecting suitable problems, it makes it more challenging to simply sample a set of new problems at random. We therefore suggested an alternative way to generate the affine combinations in [32]. This change is two-fold: each component problem f is first transformed by subtracting the global optimum value min f. This way, we know that each component function's optimum function value is set to 0.

Figure 1: Log-scaled fitness values of an example of a single many-affine function with 5 different ways of scaling. The first 4 take the mean, max, (max+min)/2 and min of 50 000 random samples to create the scale factor, while the 'equal' option does not make use of this scaling.

Table 1: Final scale factors used to generate MA-BBOB problems.
Function ID   |  1    2    3    4    5    6    7    8    9   10   11   12
Scale Factor  | 11.0 17.5 12.3 12.6 11.5 15.3 12.1 15.3 15.2 17.4 13.4 20.4
Function ID   | 13   14   15   16   17   18   19   20   21   22   23   24
Scale Factor  | 12.9 10.4 12.3 10.3  9.8 10.6 10.0 14.7 10.7 10.8  9.0 12.1
Then, instead of arithmetic weighting, a logarithmic combination is used to limit the impact of scale differences. While this simplifies the procedure of generating random function combinations, BBOB functions can sometimes differ by multiple orders of magnitude, which still introduces some bias into this procedure.
To address this shortcoming in MA-BBOB, we have investigated different scaling procedures. We still scale the global optima and perform a logarithmic transform, but we now add a normalization step. This transforms the log-precision values into an approximation of [0, 1], and then maps this back to the commonly used BBOB value range [10^-8, 10^2]. This is achieved by taking the log-transformed precision (capped at -8), adding 8 so the minimum is at 0, and dividing by a scale factor. The aim of this procedure is to make sure that the target precision of 10^2 is similarly easy to achieve on all problems.
In order to select appropriate scale factors, we need to determine practical limits of the function value for each BBOB function. We do this by considering a set of 50 000 random samples and aggregating the corresponding function values. We consider the following aggregation methods (based on the log-scaled precision): min, mean, max, and (max+min)/2. Fig. 1 illustrates the differences between these methods for a 2d problem. Note that because we use log-scaled precision, the differences between instances are rather small, so we opted to only do the sampling for one instance of each BBOB problem. Based on visual interpretation of the contour plots in Fig. 1, we (somewhat subjectively) select the (max+min)/2 scaling as the most promising method.
To avoid having to constantly repeat this random sampling procedure, we also investigate the way in which the sampled value ranges, and thus the scale factors, differ across dimensions. The results are shown in Fig. 2. With the exception of the smallest dimensions, the values remain quite stable.
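The normalization just described can be sketched in a few lines (a minimal reading of this section; the function and variable names are ours, while the factors are the hard-coded values from Tab. 1):

```python
import numpy as np

# Hard-coded scale factors from Tab. 1 (BBOB functions 1-24).
SCALE_FACTORS = np.array([11.0, 17.5, 12.3, 12.6, 11.5, 15.3, 12.1, 15.3,
                          15.2, 17.4, 13.4, 20.4, 12.9, 10.4, 12.3, 10.3,
                          9.8, 10.6, 10.0, 14.7, 10.7, 10.8, 9.0, 12.1])

def scale_value(raw_value, opt_value, fid):
    """Map a raw function value of BBOB function `fid` to [1e-8, 1e2]:
    cap the log-precision at -8, shift it to start at 0, divide by the
    per-function scale factor, and map back to the BBOB value range."""
    precision = max(raw_value - opt_value, 1e-8)   # precision w.r.t. optimum
    log_prec = np.log10(precision)                 # capped at -8 by the max above
    normalized = (log_prec + 8.0) / SCALE_FACTORS[fid - 1]  # roughly in [0, 1]
    return 10.0 ** (10.0 * normalized - 8.0)       # back to [1e-8, 1e2]
```

With this transform, a precision of 1e-8 maps to 1e-8 for every function, and a log-precision equal to the scale factor maps to 1e2, so the value range becomes comparable across components.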
As such, we decide to implement them as hard-coded values based on the median of the shown values, rounded to the nearest decimal. The resulting factors are shown in Tab. 1.

Figure 2: Evolution of the log-scaled (max+min)/2 scaling factor, relative to the problem dimension. The values are based on 50 000 samples. Each line corresponds to one of the 24 BBOB functions.
Figure 3: Location of the optima of the 24 2d BBOB functions. The red lines mark the commonly used box-constraints of [-5, 5]^D.
Figure 4: Log-scaled fitness values of an example of a single many-affine function with changed location of the optimum.

3.2 Instance Creation
A second aspect to consider when combining multiple functions is the placement of the global optimum. In the previous two papers on affine BBOB functions [8, 32], this was based on the instance of one of the two component functions. However, the original BBOB instance creation process can be considered somewhat biased, as not all functions make use of the same transformations [10, 18]. As such, if we extend the process of using the optimum of one of the used component functions, the optimum would be distributed as in Fig. 3. To avoid this issue, we decided to generate the optimum location separately, uniformly at random in the full domain [-5, 5]^d. Fig. 4 shows how a 2d function changes when moving the optimum location.

3.3 Sampling Random Functions
As a final factor impacting the types of problems generated, we consider the way in which weights are sampled. While this can indeed be done uniformly at random (with a normalization afterwards), this might not lead to the most useful set of benchmark problems.
When the weights for each function are generated this way, the probability of having a weight of 0 for any component is 0. This means that every function will contribute to some extent to the newly generated problem. As such, it would be almost impossible for this procedure to result in a unimodal problem.
One way to address this bias in function generation is to adapt how many functions are part of the newly created problem. Indeed, combinations of only two problems already lead to a vast space of interesting landscapes. We opt for a different approach: we make use of a threshold value which determines which functions contribute to the problem. The procedure for generating weights is thus as follows: (1) generate initial weights uniformly at random, (2) adapt the threshold to be the minimum of the selected value and the third-highest weight, (3) subtract this threshold from the weights and set all negative values to 0. The second step ensures that at least two problems always contribute to the new problem. Fig. 5 provides an example of a problem generated with different threshold values. We set the default value at T = 0.85, such that on average 3.6 problems will have a non-zero weight.

Figure 5: Log-scaled fitness values of an example of a 'single' many-affine function with 5 different sampling thresholds.

4 Experimental Setup
In the remainder of this paper, we make use of 1 000 functions, with weights sampled according to Sec. 3.3 with T = 0.85. Each problem uses instances uniformly selected between 1 and 100 for each of the component functions, and uniformly sampled locations of the global optimum. We use the same set of weights, instances, and optima locations in both 5 and 2 dimensions.
Comparing this set of generated problems with the pure BBOB functions is a key aspect of this work.
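The generation steps above can be sketched as follows (a minimal sketch; all names are ours, and the actual MA-BBOB implementation in IOHprofiler may differ in details):

```python
import numpy as np

def sample_problem(n_functions=24, threshold=0.85, dim=5, rng=None):
    """Sketch of the random-problem generation of Secs. 3.2-4 (names ours):
    thresholded random weights, per-component BBOB instance ids in [1, 100],
    and a uniformly random optimum location in [-5, 5]^dim."""
    rng = np.random.default_rng(rng)
    # (1) initial weights, uniform in [0, 1]
    w = rng.uniform(0.0, 1.0, n_functions)
    # (2) cap the threshold at the third-highest weight, so that at
    #     least two component functions keep a positive weight
    t = min(threshold, np.sort(w)[-3])
    # (3) subtract the threshold, set negatives to 0, and normalize
    w = np.clip(w - t, 0.0, None)
    w /= w.sum()
    instances = rng.integers(1, 101, n_functions)   # instance ids 1..100
    x_opt = rng.uniform(-5.0, 5.0, dim)             # optimum location
    return w, instances, x_opt

w, instances, x_opt = sample_problem(rng=1)
```

With the default T = 0.85, the paper reports an average of 3.6 components with non-zero weight per generated problem.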
To remove biases in terms of scaling, we apply the same scale factors to the BBOB functions. Practically, this means we use the all-zero weight vector with a 1 for the selected function to collect the BBOB data (with the location of the optima as original). We use 5 instances of each BBOB function for our comparisons. We refer to these 'pure' BBOB functions as 'BBOB', while we refer to the MA-BBOB instances as 'affine'.
Reproducibility: The code used during this project, as well as all resulting data, is available at [31]. The repository also contains additional versions of the figures which could not be included here because of the page limit. We are actively working towards a data repository for MA-BBOB performance data which will also allow automated annotation via the OPTION ontology [14], for FAIR data sharing [11].

5 Landscape Analysis
To analyze the landscapes of the created affine problems, we make use of the pflacco package [24] to compute ELA features. We use 5 sets of 1 000·d points from a scrambled Sobol' sequence. We then evaluate these points and, following the advice of [25], use min-max normalization on the function values. We finally remove all features which are constant across all problems or contain NaN values, resulting in a total of 44 remaining features. For each of these features, we then take the mean value over the 5 samples.
To gain insight into the differences between the BBOB and affine functions, we reduce the original 44-dimensional feature space to 2 dimensions. To achieve this, we make use of Uniform Manifold Approximation and Projection (UMAP). To focus on the parts of the instance space covered by the newly generated problems, we create the mapping based only on the BBOB problems. The result of applying this mapping to all 2d problems is visualized in Fig.
6b.

Figure 6: UMAP reduction of the 24 BBOB functions (5 instances each) and the 1 000 affine combinations, for 5d (a) and 2d (b). The projection is created based on the BBOB functions only. (a) Points are colored according to the weight used for BBOB function F7. (b) Points are colored according to the function type: BBOB or affine combination.

Figure 7: Distribution of (normalized) ELA feature values on the 5d version of the problems.

From Fig. 6b, we observe that many of the affine problems are clustered together. While some regions between existing BBOB problems are filled, it seems that the function generation process is not able to find solutions close to every BBOB problem. This might be caused by the fact that, by combining an average of 3.6 functions, it is highly unlikely that we find functions similar to, e.g., a linear slope or a function with low global structure.
In addition to the dimensionality reduction, we can also investigate the distributions of individual ELA features.
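The sampling and normalization setup of this section can be sketched as follows (our own minimal version: scipy's Sobol' sampler stands in for the one actually used, and a sphere function stands in for an MA-BBOB problem; the paper computes the actual ELA features with pflacco):

```python
import numpy as np
from scipy.stats import qmc

# One of the 5 samples used per problem in Sec. 5: 1000*d scrambled Sobol'
# points in the domain [-5, 5]^d, with min-max normalized function values.
d = 5
sobol = qmc.Sobol(d=d, scramble=True, seed=42)
X = qmc.scale(sobol.random(1000 * d), -5.0, 5.0)   # map [0,1]^d to [-5,5]^d

y = np.sum(X**2, axis=1)                           # stand-in objective (sphere)
y_norm = (y - y.min()) / (y.max() - y.min())       # min-max normalization
```

The normalized values `y_norm`, together with the sample `X`, are what an ELA feature computation would then consume.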
By comparing the distributions on the BBOB functions with those on the affine problems, we can gain some insight into the most common types of problems generated. In Fig. 7, we show these distributions for the min-max normalized ELA features. From this figure, we can see that for many features, the affine problems are much more clustered than the BBOB ones, which are distributed more uniformly over the space of feature values.

6 Algorithm Performance
While the ELA-based analysis gives us some insight into the low-level characteristics of the generated problems, it does not directly show how well these problems can differentiate between algorithms. As such, we also run a set of 5 different algorithms on each problem instance. The algorithms we consider are: (1) Diagonal CMA-ES from the Nevergrad platform [26] (dCMA), (2) RCobyla from the Nevergrad platform [26] (Cobyla), (3) Differential Evolution from the Nevergrad platform [26] (DE), (4) CMA-ES from the modular CMA-ES package [6] (modCMA), and (5) L-SHADE, implemented using the modular DE package [30] (modDE).
For each of these algorithms, we perform 50 independent runs on each of the 1 000 affine functions as well as on the 5 instances of each of the 24 BBOB problems. It is important to note that the BBOB functions make use of the same scale factors as used to generate the affine functions, in order to further reduce the impact of scale differences.

Figure 8: Results of ranking the 5 algorithms on the 5d problems, based on AUC after 10 000 evaluations. (a) Distribution of ranks based on per-function AUC after 10 000 evaluations. (b) UMAP reduction of the BBOB functions (5 instances) and 1 000 affine combinations; projection created based on BBOB only, colored by the algorithm with the largest AUC.
These experiments are performed on both the 2d and 5d versions of these problems.
To analyze the differences in algorithm performance between the two sets of problems, we consider the normalized area under the curve (AUC) of the empirical cumulative distribution function (ECDF) as the performance metric. For the ECDF, we use a set of 51 logarithmically spaced targets from 10^-8 to 10^2. Based on the AUC values, we then rank the set of 5 algorithms on each problem. The distribution of these ranks is shown in Fig. 8a. We observe that the overall patterns between the BBOB and affine problems are preserved. There are some notable differences, particularly with regard to the performance of Cobyla. While this algorithm often performs poorly on BBOB, on the affine problems it is ranked worst in a majority of cases. This suggests that problems where this algorithm performs well (mostly unimodal problems) are not as well represented among the MA-BBOB functions.
In addition to this ranking, we can also link the ELA features to algorithm performance. To explore whether the used features might correlate with a problem's difficulty from the algorithm's perspective, we link the dimensionality reduction with the best algorithm from the portfolio. This is visualized for the 5d problems in Fig. 8b.

7 Algorithm Selection
As a final experiment, we use the generated problems in an algorithm selection context. For each of the 5 algorithms, we train a random forest regression model to predict the AUC on each problem. The input variables for this model are either the ELA features, as is commonly done, or the weights used to generate the functions. By contrasting these approaches, we obtain an intuition for how well the ELA features capture the algorithm-relevant properties of the function.
While we can train our models in a common cross-validation manner, we can also use the same setup to test the generalizability of models trained on the original BBOB problems only.
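The AUC metric and ranking described above can be sketched as follows (our own simplified single-run reading; the aggregation over the 50 runs used in the paper may differ):

```python
import numpy as np

TARGETS = np.logspace(-8, 2, 51)   # 51 log-spaced targets in [1e-8, 1e2]

def ecdf_auc(best_so_far, targets=TARGETS):
    """Normalized area under the ECDF: the fraction of (evaluation, target)
    pairs for which the running-best precision has reached the target."""
    bsf = np.minimum.accumulate(np.asarray(best_so_far, dtype=float))
    hits = bsf[:, None] <= targets[None, :]   # (evals, targets) hit matrix
    return hits.mean()

def rank_algorithms(aucs):
    """Rank algorithms on one problem: rank 1 = largest AUC."""
    order = np.argsort(-np.asarray(aucs))
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(aucs) + 1)
    return ranks
```

For example, `rank_algorithms([0.9, 0.2, 0.5])` yields ranks `[1, 3, 2]`; applying this per problem gives rank distributions of the kind shown in Fig. 8a.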
The resulting mean absolute errors (MAE) of these models are plotted in Fig. 9a.
We observe that the ELA representation often performs worse than the weights-based one. This suggests that the used ELA features might not be sufficient to achieve generalization of an AS model. This is especially clear for the generalizability scenario, where we would have expected ELA to perform better. This poor performance seems to suggest that the ELA features might not fully capture all instance properties that determine the behavior of the algorithms.

Figure 9: Performance of the random forest model predicting algorithm performance (a) or the best algorithm for each problem (b). (a) Mean absolute error obtained when predicting the AUC of each of the 5 algorithms based on either the ELA features or the used weights. Top: model trained on a mixture of BBOB and affine functions using 10-fold cross-validation. Bottom: model trained on BBOB only and predicting performance on affine problems. Left: 2d problems, right: 5d problems. (b) Cumulative distribution of the loss (AUC) of the random forest models predicting the best algorithm (2d and 5d problems combined), based on either the ELA features or the weights representation of the problems.

When training a very basic AS model (predicting the best algorithm) in the same manner (training on BBOB and evaluating on affine), we achieve performance differences similar to those suggested by Fig. 9a: the weighted F1-score based on ELA is 0.67, while the score based on weights is 0.70. The corresponding loss in terms of AUC values is plotted in Fig. 9b.
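The regression setup of this section can be sketched with scikit-learn on synthetic stand-in data (the data below is ours; in the paper, the rows are (MA-)BBOB instances and the inputs are either the 44 ELA features or the 24 component weights):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for the Sec. 7 data: inputs mimic 24 component weights,
# and the target mimics one algorithm's measured AUC per problem.
rng = np.random.default_rng(0)
effect = rng.uniform(size=24)            # stand-in per-function effect on AUC
X_train = rng.uniform(size=(800, 24))
y_train = X_train @ effect
X_test = rng.uniform(size=(200, 24))
y_test = X_test @ effect

# One regression model per algorithm; MAE as in Fig. 9a.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
```

Repeating this per algorithm and picking the algorithm with the best predicted AUC per problem yields the basic algorithm selector whose loss distribution is shown in Fig. 9b.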
This figure confirms the previous observation that the ELA features are not sufficiently representative to accurately describe the problems in a way which is relevant for ranking optimization algorithms.

8 Conclusions and Future Work
The proposed procedure for generating new problems as affine combinations of the 24 BBOB problems can serve as a function generator that helps fill the instance space spanned by the BBOB functions. By applying a scaling step before combining the problems, we make sure that the resulting problems all have an equivalent range of objective values, regardless of the used weights. In addition, the uniform location of the global optima in the full domain avoids some of the bias of the BBOB problems. By analyzing the ELA features of 1 000 of these many-affine MA-BBOB problems, we observed that they do indeed fill a part of the instance space. There are still some inherent limitations arising from the fact that the building blocks are fixed. For example, it is impossible to generate a problem similar to the linear slope. Similarly, it is highly unlikely that new problems have specific properties such as low global structure. Nevertheless, the overall ranking of optimization algorithms on these problems remains similar to the ranking on the BBOB problems, suggesting that the algorithmic challenges might be similar.
The results presented above had as their primary focus a first analysis of the generated MA-BBOB instances and how they compare to the BBOB functions. For this purpose, we have considered randomly sampled instances. The selection of 'representative' instance collections still remains to be done. Another important step for future work is to test the generalization ability of AutoML systems that are trained on MA-BBOB functions and tested on numerical black-box optimization problems that do not originate from the BBOB family.
In this context, our basic Random-Forest-based algorithm selector indicates that the ELA features might not be as suitable for this generalization task as expected, motivating further research on feature engineering for black-box optimization.

9 Broader Impact Statement
After careful reflection, the authors have determined that this work presents no notable negative impacts to society or the environment.

10 Submission Checklist
1. For all authors. . .
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes]
2. If you are including theoretical results. . .
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments. . .
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? [Yes]
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes]
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes]
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes]
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)?
[Yes]
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [N/A]
(g) Did you run ablation studies to assess the impact of different components of your approach? [N/A]
(h) Did you use the same evaluation protocol for the methods being compared? [Yes]
(i) Did you compare performance over time? [Yes]
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes]
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We aggregate data into AUC instead of reporting error bars on fixed-budget or fixed-target results.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A]
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [No] We did not record the computation time needed while running experiments.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? [N/A]
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a url? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable?
[N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

Acknowledgements. Our work is financially supported by ANR-22-ERCS-0003-01 project VARIATION, by the CNRS INS2I project IOHprofiler, and by the NWO DACCOMPLI project (628.011.002).

References
[1] Hossein Alipour, Mario Andrés Muñoz, and Kate Smith-Miles. 2023. Enhanced instance space analysis for the maximum flow problem. Eur. J. Oper. Res. 304, 2 (2023), 411–428. https://doi.org/10.1016/j.ejor.2022.04.012
[2] Anne Auger and Nikolaus Hansen. 2020. A SIGEVO Impact Award for a Paper Arising from the COCO Platform: A Summary and Beyond. https://evolution.sigevo.org/issues/HTML/sigevolution-13-4/home.html. Issue 3.
[3] Nacim Belkhir, Johann Dréo, Pierre Savéant, and Marc Schoenauer. 2017. Per instance algorithm configuration of CMA-ES with limited budget. In Proc. of Genetic and Evolutionary Computation (GECCO'17). ACM, 681–688. https://doi.org/10.1145/3071178.3071343
[4] Jakob Bossek, Pascal Kerschke, Aneta Neumann, Markus Wagner, Frank Neumann, and Heike Trautmann. 2019. Evolving diverse TSP instances by means of novel and creative mutation operators. In Proc. of Conference on Foundations of Genetic Algorithms (FOGA'19), Tobias Friedrich, Carola Doerr, and Dirk V. Arnold (Eds.). ACM, 58–71. https://doi.org/10.1145/3299904.3340307
[5] Jakob Bossek and Markus Wagner. 2021. Generating instances with performance differences for more than just two algorithms. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'21, Companion material), Krzysztof Krawiec (Ed.). ACM, 1423–1432. https://doi.org/10.1145/3449726.3463165
[6] Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021. Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules. In Proc.
of Genetic and Evolutionary Computation Conference (GECCO’21, Companionmaterial) . ACM, 1375–1384. https://doi.org/10.1145/3449726.3463167[7]Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck.2021. IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics. CoRRabs/2111.04077 (2021). arXiv:2111.04077 https://arxiv.org/abs/2111.04077[8]Konstantin Dietrich and Olaf Mersmann. 2022. Increasing the Diversity of Benchmark Func-tion Sets Through Affine Recombination. In Proc. of Parallel Problem Solving from Nature(PPSN’22) (LNCS, Vol. 13398) , Günter Rudolph, Anna V. Kononova, Hernán E. Aguirre, PascalKerschke, Gabriela Ochoa, and Tea Tusar (Eds.). Springer, 590–602. https://doi.org/10.1007/978-3-031-14714-2_41[9]Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff.2021. COCO: A platform for comparing continuous optimizers in a black-box setting. Optim.Methods Softw. 36, 1 (2021), 114–144.[10] Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. 2009. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions . Technical Report RR-6829.INRIA. https://hal.inria.fr/inria-00362633/document[11] Annika Jacobsen, Ricardo de Miranda Azevedo, Nick S. Juty, Dominique Batista, Simon J.Coles, Ronald Cornet, Mélanie Courtot, Mercè Crosas, Michel Dumontier, Chris T. A. Evelo,Carole A. Goble, Giancarlo Guizzardi, Karsten Kryger Hansen, Ali Hasnain, Kristina M. Hettne,Jaap Heringa, Rob W. W. Hooft, Melanie Imming, Keith G. Jeffery, Rajaram Kaliyaperumal,Martijn G. Kersloot, Christine R. Kirkpatrick, Tobias Kuhn, Ignasi Labastida, Barbara Magagna,Peter McQuilton, Natalie Meyers, Annalisa Montesanti, Mirjam van Reisen, Philippe Rocca-Serra, Robert Pergl, Susanna-Assunta Sansone, Luiz Olavo Bonino da Silva Santos, JulianeSchneider, George O. Strawn, Mark Thompson, Andra Waagmeester, Tobias Weigel, Mark D.Wilkinson, Egon L. 
Willighagen, Peter Wittenburg, Marco Roos, Barend Mons, and ErikSchultes. 2020. FAIR Principles: Interpretations and Implementation Considerations. DataIntell. 2, 1-2 (2020), 10–29. https://doi.org/10.1162/dint_r_00024[12] Pascal Kerschke, Holger H. Hoos, Frank Neumann, and Heike Trautmann. 2019. AutomatedAlgorithm Selection: Survey and Perspectives. Evol. Comput. 27, 1 (2019), 3–45. https://doi.org/10.1162/evco_a_00242[13] Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov,and Carola Doerr. 2022. Per-run Algorithm Selection with Warm-starting using Trajectory-based Features. In Proc. of Parallel Problem Solving from Nature (PPSN’22) (LNCS, Vol. 13398) .Springer, 46–60. https://doi.org/10.1007/978-3-031-14714-2_4 Free version availableathttps://arxiv.org/abs/2204.09483 .[14] Ana Kostovska, Diederick Vermetten, Carola Doerr, Sašo Džeroski, Panče Panov, and TomeEftimov. 2022. OPTION: OPTImization Algorithm Benchmarking ONtology. IEEE Trans. Evol.Comput. (2022). https://doi.org/10.1109/TEVC.2022.3232844 To appear. Free versionavailable at https://arxiv.org/abs/2211.11332 .12[15] Benjamin Lacroix and John McCall. 2019. Limitations of Benchmark Sets and LandscapeFeatures for Algorithm Selection and Performance Prediction. In Proc. of Genetic and Evolution-ary Computation (GECCO’19) (Prague, Czech Republic). ACM, New York, NY, USA, 261–262.https://doi.org/10.1145/3319619.3322051[16] Thibault Lechien, Jorik Jooken, and Patrick De Causmaecker. 2023. Evolving test instancesof the Hamiltonian completion problem. Comput. Oper. Res. 149 (2023), 106019. https://doi.org/10.1016/j.cor.2022.106019[17] Fu Xing Long, Bas van Stein, Moritz Frenzel, Peter Krause, Markus Gitterle, and Thomas Bäck.2022. Learning the characteristics of engineering optimization problems with applications inautomotive crash. In Proc. of Genetic and Evolutionary Computation (GECCO’22) , Jonathan E.Fieldsend and Markus Wagner (Eds.). ACM, 1227–1236. 
https://doi.org/10.1145/3512290.3528712[18] Fu Xing Long, Diederick Vermetten, Bas van Stein, and Anna V. Kononova. 2022. BBOBInstance Analysis: Landscape Properties and Algorithm Performance across Problem In-stances. CoRR abs/2211.16318 (2022). https://doi.org/10.48550/arXiv.2211.16318arXiv:2211.16318[19] Alejandro Marrero, Eduardo Segredo, Coromoto León, and Emma Hart. 2022. A Novelty-Search Approach to Filling an Instance-Space with Diverse and Discriminatory Instancesfor the Knapsack Problem. In Proc. of Parallel Problem Solving from Nature (PPSN’22) (LNCS,Vol. 13398) . Springer, 223–236. https://doi.org/10.1007/978-3-031-14714-2_16[20] Olaf Mersmann, Bernd Bischl, Heike Trautmann, Mike Preuss, Claus Weihs, and GünterRudolph. 2011. Exploratory landscape analysis. In Proc. of Genetic and Evolutionary Computa-tion (GECCO’11) . ACM, 829–836.[21] Mario A. Muñoz and Kate Smith-Miles. 2020. Generating New Space-Filling Test Instances forContinuous Black-Box Optimization. Evol. Comput. 28, 3 (2020), 379–404. https://doi.org/10.1162/evco_a_00262[22] Mario Andrés Muñoz, Tao Yan, Matheus R. Leal, Kate Smith-Miles, Ana Carolina Lorena,Gisele L. Pappa, and Rômulo Madureira Rodrigues. 2021. An Instance Space Analysis ofRegression Problems. ACM Trans. Knowl. Discov. Data 15, 2 (2021), 28:1–28:25. https://doi.org/10.1145/3436893[23] Ana Nikolikj, Carola Doerr, and Tome Eftimov. 2023. RF+ clust for Leave-One-Problem-OutPerformance Prediction. In Proc. of Applications of Evolutionary Computation (Evo Applica-tions’23) . Springer, 285–301.[24] Raphael Patrick Prager. 2022. pFlacco. https://pypi.org/project/pflacco/ .[25] Raphael Patrick Prager and Heike Trautmann. 2023. Nullifying the Inherent Bias of Non-invariant Exploratory Landscape Analysis Features. In Proc. of Applications of EvolutionaryComputation (Evo Applications’23) . Springer, 411–425.[26] Jérémy Rapin and Olivier Teytaud. 2018. 
Nevergrad - A gradient-free optimization platform.https://GitHub.com/FacebookResearch/Nevergrad .[27] Quentin Renau, Johann Dreo, Carola Doerr, and Benjamin Doerr. 2019. Expressiveness and Ro-bustness of Landscape Features. In Proc. of Genetic and Evolutionary Computation (GECCO’19)(Prague, Czech Republic). ACM, 2048–2051. https://doi.org/10.1145/3319619.332691313[28] Gresa Shala, André Biedenkapp, Noor H. Awad, Steven Adriaensen, Marius Lindauer, andFrank Hutter. 2020. Learning Step-Size Adaptation in CMA-ES. In Proc. of Parallel ProblemSolving from Nature (PPSN’20) (LNCS, Vol. 12269) . Springer, 691–706. https://doi.org/10.1007/978-3-030-58112-1_48[29] Ye Tian, Shichen Peng, Xingyi Zhang, Tobias Rodemann, Kay Chen Tan, and Yaochu Jin.2020. A Recommender System for Metaheuristic Algorithms for Continuous OptimizationBased on Deep Recurrent Neural Networks. IEEE Trans. Artif. Intell. 1, 1 (2020), 5–18. https://doi.org/10.1109/TAI.2020.3022339[30] Diederick Vermetten. 2023. modular Differential Evolution. https://github.com/Dvermetten/ModDE .[31] Diederick Vermetten, Furong Ye, Thomas Bäck, and Carola Doerr. 2023. Reproducibil-ity files and additional figures. Code repository: https://github.com/Dvermetten/Many-affine-BBOB Data and figure repository: https://doi.org/10.5281/zenodo.7826036 .[32] Diederick Vermetten, Furong Ye, and Carola Doerr. 2023. Using Affine Combinations of BBOBProblems for Performance Assessment. CoRR abs/2303.04573 (2023). https://doi.org/10.48550/arXiv.2303.04573 arXiv:2303.04573[33] Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, and Thomas Bäck. 2022. IOH-analyzer: Detailed Performance Analysis for Iterative Optimization Heuristic. ACM Trans.Evol. Learn. Optim. 2, 1 (2022), 3:1–3:29. https://doi.org/10.1145/3510426 IOHanalyzeris available at CRAN, on GitHub, and as web-based GUI, see https://iohprofiler.github.io/IOHanalyzer/ for links.[34] Estefania Yap, Mario Andrés Muñoz, and Kate Smith-Miles. 2022. 
Informing MultiobjectiveOptimization Benchmark Construction Through Instance Space Analysis. IEEE Trans. Evol.Comput. 26, 6 (2022), 1246–1260. https://doi.org/10.1109/TEVC.2022.3205165[35] Martin Zaefferer and Frederik Rehbach. 2020. Continuous Optimization Benchmarks by Simula-tion. In Proc. of Parallel Problem Solving from Nature (PPSN’20) (LNCS, Vol. 12269) , Thomas Bäck,Mike Preuss, André H. Deutz, Hao Wang, Carola Doerr, Michael T. M. Emmerich, and HeikeTrautmann (Eds.). Springer, 273–286. https://doi.org/10.1007/978-3-030-58112-1_1914
HBeWbaCxPUY
uN70Dum6pC2
automl.cc/AutoML/2023/ABCD_Track
2023
MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts
["Diederick Vermetten", "Furong Ye", "Thomas B\u00e4ck", "Carola Doerr"]
Extending a recent suggestion to generate new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform, we propose in this work a further generalization that allows multiple affine combinations of the original instances and arbitrarily chosen locations of the global optima. We demonstrate that the MA-BBOB generator can help fill the instance space, while overall patterns in algorithm performance are preserved. By combining the landscape features of the problems with the performance data, we pose the question of whether these features are as useful for algorithm selection as previous studies have implied. MA-BBOB is built on the publicly available IOHprofiler platform, which facilitates standardized experimentation routines, provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.
["Benchmarking", "algorithm selection", "black-box optimization", "numerical optimization", "function generation", "instance space", "exploratory landscape analysis"]
MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts

Diederick Vermetten¹, Furong Ye¹, Thomas Bäck¹, Carola Doerr²
¹ Leiden Institute for Advanced Computer Science (LIACS), Leiden University, The Netherlands
² Sorbonne Université, CNRS, LIP6, Paris, France

Abstract. Extending a recent suggestion to generate new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform, we propose in this work a further generalization that allows multiple affine combinations of the original instances and arbitrarily chosen locations of the global optima. We demonstrate that the MA-BBOB generator can help fill the instance space, while overall patterns in algorithm performance are preserved. By combining the landscape features of the problems with the performance data, we pose the question of whether these features are as useful for algorithm selection as previous studies have implied. MA-BBOB is built on the publicly available IOHprofiler platform, which facilitates standardized experimentation routines, provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.

1 Introduction

Despite a long tradition of developing automated Machine Learning (AutoML) approaches for numerical black-box optimization contexts [3, 12, 28], empirical evaluations are heavily centered around very few benchmark collections. One of the most popular collections is the BBOB suite [10] of the COmparing COntinuous Optimizers (COCO) platform [9]. The BBOB suite was originally designed to help researchers analyze the behavior of numerical black-box algorithms in different optimization contexts.
Over time, however, BBOB has been used for many other purposes, including evaluating AutoML methods, even though the problems were never designed to be suitable for this task.

With the increasing popularity of the BBOB benchmarks, the wide availability of shared performance data enabled the application of, e.g., algorithm selection methods [12]. To achieve these algorithm selectors, a representation of the problem space is required based on which the performance of different algorithms can be predicted. In the case of BBOB, the most commonly used representation makes use of Exploratory Landscape Analysis (ELA), which has been shown to be able to accurately distinguish between BBOB problems [20, 27].

A key problem of algorithm selection based on BBOB problems lies in the ability to test how well the results generalize. One approach is to use a leave-one-function-out method [23], where the selector is trained on 23 functions and tested on the remaining one. This generally leads to poor performance, as each problem has been specifically designed to have different global function properties. As such, another common method is to leave out a set of problem instances for testing. This way, the selector is trained on all types of problems. However, this has a high potential to overfit the particular biases of the BBOB problems, an often overlooked risk.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0

To remedy these potential issues, the ability to construct new functions which fill the spaces between existing BBOB functions could be critical.
If the instance space can be filled with new problems, these could be used not only to test the generalizability of algorithm selection methods, but also more generally to gain insights into, e.g., the relation between the ELA representation of a problem and the behavior of optimization algorithms.

Filling the instance space is a topic of rising interest within the optimization community [1, 19, 22, 34]. While some work has been conducted to create problem instances that reflect the properties of real-world applications or obtain characteristics similar to those of existing problems, other work tries to generate diverse instances. For example, symbolic regression and simulation of Gaussian processes have been applied to generate benchmarks reflecting real-world problem behaviours in [35] and [17, 29]. On the other hand, research on generating diverse instances for combinatorial optimization has been conducted in [4, 5, 16, 19]. Regarding black-box numerical optimization, approaches based on Genetic Programming (GP) have succeeded in generating novel problem instances with controllable characteristics defined by their ELA features in [21], in which the authors used ELA features of BBOB instances as a baseline to regenerate similar instances and design diverse instances. However, to obtain problems with desired characteristics, the GP needs to be executed for each dimension. A recent paper proposed a different perspective on generating new problem instances for numerical optimization: Dietrich and Mersmann propose to create new problems through weighted combinations of BBOB problems [8]. By creating these affine combinations of existing problems, it seems that the ELA features can transition smoothly between the two component functions. Moreover, affine combinations of two BBOB problems were applied to analyze the behavior of optimization algorithms in [32].
The paper's results demonstrated that the algorithms' performance alters along the weights of the two combined problems.

In this paper, we extend upon the modified version of the affine BBOB combinations [32] by generalizing to combinations between any number of BBOB functions. Through doing this, we address the concerns regarding the scaling of the component functions and the impact of the location of the global optimum. We also propose a modified mechanism to sample weights to avoid potential biases resulting from including too many problems.

From the proposed many-affine problem generation method, we sample 1 000 instances, for which we perform both an ELA-based analysis as well as an analysis of the performance of a set of algorithms. By combining these results in a simple algorithm selection model, we raise the question of whether or not the ELA features are sufficiently representative to create a generalizable algorithm selection model.

In summary, our key contributions and findings are:
1. We introduce MA-BBOB, a generator of arbitrary affine combinations of the 24 BBOB functions. We explain the rationales behind the various design choices, which include the location of the optimum, the scaling used for interpolating the different functions, and the way of sampling functions from this space. The resulting generator is built on the IOHprofiler platform, which enables benchmarking setups equivalent to those of the original BBOB problems.
2. We analyze 1 000 randomly sampled instances in 2d and in 5d via Exploratory Landscape Analysis (ELA [20]) and show that the combined MA-BBOB functions cover the space between the original 'pure' BBOB functions quite well, with the exception of some problems like the linear slope and the ellipsoid problem, which are essentially only available in the 'pure' BBOB functions but disappear in the MA-BBOB instances with non-trivial weights.
3. We compare the performance of five black-box optimization algorithms on the original BBOB and the 1 000 randomly sampled MA-BBOB instances and show that the rank distribution changes slightly in favour of the CMA-ES algorithms and to the disadvantage of RCobyla.
On the original BBOB suite of 24 single-objective, noiselessoptimization problems [10], hundreds of different optimization algorithms have been tested [2].One key reason for the popularity of this suite is the ability to create independent instancesof the same problem, which are generated by applying transformations in the domain and theobjective space. These transformations include rotation, scaling of objective value and moving thelocation of the global optimum. They allow researchers to evaluate possible bias in their algorithms,and are hence an important component of algorithm benchmarking.The availability of many instances are also a key enabler for the evaluation of AutoML ap-proaches in black-box optimization contexts. Since not all instances are easily accessible via theoriginal COCO implementation, we have made direct access to the instances available in ourIOHprofiler benchmarking environment [7, 33].Affine Function Combinations. While the availability of numerous instances per each BBOBfunction facilitates AutoML studies, it has been observed that the generalization ability of modelstrained on BBOB and tested on independent problems is disappointing [ 13,15]. This motivated thedesign of new problems to extend the existing BBOB suite. One such approach was proposed in [ 8].It suggests to consider affine combinations of two different problem instances [ 8]. The resultingproblems were analyzed with respect to their fitness landscapes, as seen via exploratory landscapeanalysis (ELA [ 20]). They have been shown to smoothly connect their component functions in areduced-dimensionality ELA space. This seems to imply that we can use these problems to connectany pair of existing problems, which would significantly add to the instance space.In our follow-up study [ 32] we recently proposed a modified version of creating these affinefunction combinations, see Sec. 3.1 for details. 
We used these functions to compare the performanceof five selected black-box optimization algorithms and showed that the behavior differences arenot as smooth as the differences in ELA space. In several cases, combinations of two functions arebest solved by a different algorithm than the one which solved the component problems.3 The MA-BBOB Benchmark Suite3.1 Scaling of Function ValuesWhen combining multiple functions to create a new benchmark problem, one key factor whichimpacts the landscape is the scaling of the combined functions. Since we are interested in takingaffine combinations of existing functions, a difference in scale might lead one function to dominateall others, leading to limited coverage of the feature space.The original affine BBOB functions proposed in [ 8] make use of a tuning procedure for findinguseable weights. While this allows for selecting suitable problems, it makes it more challengingto just randomly sample a set of new problems. We therefore suggested an alternative way togenerate the affine combinations in [ 32]. This change is two-fold: each component problem fisfirst transformed by subtracting the global optimum value min f. This way, we know that each3−4 −2 0 2 4mean−4−2024−4 −2 0 2 4max−4 −2 0 2 4minmax−4 −2 0 2 4min−4 −2 0 2 4equal−1.2−0.60.00.61.21.82.43.03.6Figure 1: Log-scaled fitness values of an example of a single many-affine function with 5 different waysof scaling. The first 4 are taking the mean, max, (max+min)/2and min of 50 000 randomsamples to create the scale factor, while the ’equal’ option does not make use of this scaling.Function ID 1 2 3 4 5 6 7 8 9 10 11 12Scale Factor 11.0 17.5 12.3 12.6 11.5 15.3 12.1 15.3 15.2 17.4 13.4 20.4Function ID 13 14 15 16 17 18 19 20 21 22 23 24Scale Factor 12.9 10.4 12.3 10.3 9.8 10.6 10.0 14.7 10.7 10.8 9.0 12.1Table 1: Final scale factors used to generate MA-BBOB problems.component functions optimum function value is set to 0. 
Then, instead of arithmetic weighting, alogarithmic combination is used to limit the impact of scale differences. While this simplifies theprocedure of generating random function combinations, BBOB functions can sometimes differ bymultiple orders of magnitude, which still produces some bias in this procedure.To address this shortcoming in MA-BBOB, we have investigated different scaling procedures. Westill scale the global optima and perform a logarithmic transform, but we now add a normalizationstep. This transforms the log-precision values into an approximation of [0,1], and then mapsthis back to the commonly used BBOB domain [10−8,102]. This is achieved by taking the log-transformed precision (capped at −8), adding 8so the minimum is at 0and dividing by a scalefactor . The aim of this procedure is to make sure that the target precision of 102is similarly easy toachieve on all problems.In order to select appropriate scale factors, we need to determine practical limits of the functionvalue for each BBOB function. We do this by considering a set of 50 000 random samples andaggregating the corresponding function values. We consider the following aggregation methods(based on the log-scaled precision): min,mean ,max,(max+min)/2. Fig. 1 illustrates the differencesbetween these methods, for a 2dproblem. Note that because we use log-scaled precision, thedifferences between instances are rather small, so we opted to only do the sampling for one instanceof each BBOB problem. Based on visual interpretation of the contour plots in Fig. 1, we (somewhatsubjectively) select the (max+min)/2scaling as the most promising method.To avoid having to constantly repeat this random sampling procedure, we also investigate theway in which the scales of the random factors, and thus the scale factors, differ across dimensions.The results are shown in Fig. 2. With exception of the smallest dimensions, the values remain quitestable. 
As such, we decide to implement them as hard-coded values based on the median of theshown values, rounded to the nearest decimal. The resulting factors are shown in Tab. 1.3.2 Instance CreationA second aspect to consider when combining multiple functions is the placement of the globaloptimum. In the previous two papers [ 8,32] on affine BBOB functions, this was done based425 10 15 20 25 30 35 40Dimension05101520(Log(max)+Log(min))/2Figure 2: Evolution of the log-scaled (max+min)/2scaling factor, rel-ative to the problem dimension. The values are based on50 000 samples. Each line corresponds to one of the 24 BBOBfunctions.−5−4 −2 0 2 45−5−4−20245Figure 3: Location of optima ofthe 24 2d BBOB func-tions. The red linesmark the commonlyused box-constraintsof[−5,5]D.−4−2 024−4−2024−4−2 024−4−2 024−4−2 024−4−2 024−1.6−1.2−0.8−0.40.00.40.81.2Figure 4: Log-scaled fitness values of an example of a single many-affine function with changedlocation of optimum.on the instance of one of the two component functions. However, the original BBOB instancecreation process can be considered somewhat biased, as not all functions make use of the sametransformations [ 10,18]. As such, if we extend the process of using the optimum of one of theused component functions, the optimum would be distributed as in Fig. 3. To avoid this issue,we decided to generate the optimum location separately, uniformly at random in the full domain[−5,5]d. Fig. 4 shows how a 2d-function changes when moving the optimum location.3.3 Sampling random functionsAs a final factor impacting the types of problems generated, we consider the way in which weightsare sampled. While this can indeed be done uniformly at random (with a normalization afterwards),this might not lead to the most useful set of benchmark problems. 
When the weights for eachfunction are generated this way, the probability of having a weight of 0for any component is 0.This means that every function will contribute to some extent to the newly generated problem. Assuch, it would be almost impossible for this procedure to result in a unimodal problem.One way to address this bias in function generation is to adapt how many functions are part ofthe newly created problem. Indeed, the combinations of two problems already lead to a vast spaceof interesting landscapes. We opt for a different approach: we make use of a threshold value whichdetermines which functions contribute to the problem. The procedure for generating weights is5−4−2 024T=0−4−2024−4−2 024T=0.4−4−2 024T=0.55−4−2 024T=0.7−4−2 024T=0.85−1.5−1.0−0.50.00.51.01.52.02.5Figure 5: Log-scaled fitness values of an example of a ’single’ many-affine function with 5 differentsampling thresholds.thus as follows: (1) Generate initial weight uniformly at random, (2) adapt the threshold to be theminimum of the selected value and the third-highest weight, (3) this threshold is subtracted fromthe weights, all negative values are set to 0. The second step is to ensure that at least two problemsalways contribute to the new problem. Fig. 5 provides an example of a problem generated withdifferent threshold values. We decide to set the default value at T=0.85, such that on average 3.6problems will have a non-zero weight.4 Experimental SetupIn the remainder of this paper, we will make use of 1 000 functions, with weights sampled accordingto Sec. 3.3 with T=0.85. Each problem uses instances uniformly selected between 1 and 100 foreach of the component functions, and uniformly sampled locations of the global optimum. We usethe same set of weights, instances and optima locations in both 5and2dimensions.Comparing this set of generated problems with the pure BBOB functions is a key aspect of thiswork. 
To remove biases in terms of scaling, we apply the same scale factors to the BBOB functions.Practically, this means we use the all-zero weights with a 1 for the selected function to collectthe BBOB data (with the location of the optima set as original). We use 5instances of each BBOBfunction for our comparisons. We refer to these ‘pure’ BBOB functions as ‘BBOB’, while we referto the MA-BBOB instances as ‘affine’.Reproducibility: The code used during this project, as well as all resulting data, is availableat [31]. The repository also contains additional versions of the figures which could not be includedhere because of the page limit. We are actively working towards a data repository for MA-BBOBperformance data which will also allow automated annotation via the OPTION ontology [14], forFAIR data sharing [11].5 Landscape AnalysisTo analyze the landscapes of the created affine problems, we make use of the pflacco package [ 24]to compute ELA features. We use 5sets of 1 000 dpoints from a scrambled Sobol’ sequence. Wethen evaluate these points and follow the advice of [25] and use min-max normalization on thesefunction values. We finally remove all features which are constant across all problems or containNAN values, resulting in a total of 44remaining features. For each of these features, we then takethe mean value among the 5samples.To gain insight into the differences between the BBOB and affine functions, we reduce theoriginal 44dimensional space into 2d. To achieve this, we make use of the Uniform ManifoldApproximation Projection (UMAP). To focus on the parts of the instance space covered by thenewly generated problems, we create the mapping based only on the BBOB problems. The result ofapplying this mapping to all 2dproblems is visualized in Fig. 
6b.6−2 0 2 4 6 8 10 12x0−10123456x1W60.00.20.40.60.81.0kindAffineBBOB(a)Points are colored according to the weights usedfor BBOB function F7.4 6 8 10 12 14 16x002468x1kindAffineBBOB(b)Points are colored according to the functiontype: BBOB of affine combination.Figure 6: UMAP-reduction of the 24 BBOB functions (5 instances each) and 1000 affine combinationsfor5d(a) and 2d(b). The projection is created based on the BBOB only.ela_meta.lin_simple.adj_r2ela_meta.lin_simple.interceptela_meta.lin_simple.coef.minela_meta.lin_simple.coef.maxela_meta.lin_simple.coef.max_by_minela_meta.lin_w_interact.adj_r2ela_meta.quad_simple.adj_r2ela_meta.quad_simple.condela_meta.quad_w_interact.adj_r2ela_distr.skewnessela_distr.kurtosisela_distr.number_of_peaksela_level.mmce_lda_10ela_level.mmce_qda_10ela_level.lda_qda_10ela_level.mmce_lda_25ela_level.mmce_qda_25ela_level.mmce_lda_50ela_level.mmce_qda_50ela_level.lda_qda_50nbc.nn_nb.sd_rationbc.nn_nb.mean_rationbc.nn_nb.cornbc.dist_ratio.coeff_varnbc.nb_fitness.cordisp.ratio_mean_02disp.ratio_mean_05disp.ratio_mean_10disp.ratio_mean_25disp.ratio_median_02disp.ratio_median_05disp.ratio_median_10disp.ratio_median_25disp.diff_mean_02disp.diff_mean_05disp.diff_mean_10disp.diff_mean_25disp.diff_median_02disp.diff_median_05disp.diff_median_10disp.diff_median_25ic.h_maxic.eps_sic.m00.00.20.40.60.81.0AffineBBOBFigure 7: Distribution of (normalized) ELA feature values on the 5dversion of the problems.From Fig. 6b, we observe that many of the affine problems are clustered together. While someregions between existing BBOB problems are filled, it seems that the function generation process isnot able to find solutions close to every BBOB problem. This might be caused by the fact that bycombining an average of 3.6functions, it is highly unlikely that we find functions similar to e.g., alinear slope or a function with low global structure.In addition to the dimensionality reduction, we can also investigate the distributions of indi-vidual ELA features. 
By comparing the distributions on the BBOB functions with the ones on theaffine problems, we can gain some insight into the most common types of problems generated. InFig. 7, we show these distributions for the min-max normalized ELA features. From this figure, wecan see that for many features, the affine problems are much more clustered than the BBOB ones,which are distributed more uniformly over the space of feature values.6 Algorithm PerformanceWhile the ELA based analysis gives us some insight into the low-level characteristics of the generatedproblems, it does not directly give insight into the power of these problems to differentiate betweenalgorithms. As such, we also run a set of 5 different algorithms on each problem instance. Thealgorithms we consider are: (1) Diagonal CMA-ES from the Nevergrad platform [ 26] (dCMA), (2)RCobyla from the Nevergrad platform [ 26] (Cobyla), (3) Differential Evolution from the Nevergradplatform [ 26] (DE), (4) CMA-ES from the modular CMA-ES package [ 6] (modCMA), and (5) L-SHADE, implemented using the modular DE package [30] (modDE).For each of these algorithms, we perform 50independent runs on each of the 1 000 affinefunctions as well as the 5instances from each of the 24BBOB problems. It is important to note that71 2 3 4 5Rank (BBOB)0.00.10.20.30.40.50.60.70.80.91.0Proportion1 2 3 4 5Rank (Affine)IDDiagonalCMADifferentialEvolutionRCobylamodcmamodde(a)Distribution of ranks based on per-functionAUC after 10 000 evaluations.0 2 4 6 8 10 12x0−101234567x1bestDiagonalCMAmodcmamoddeRCobylakindAffineBBOB(b)UMAP-reduction of BBOB functions (5 in-stances) and 1000 affine combinations. Projec-tion created based on BBOB only. Color basedon the algorithm with the largest AUC.Figure 8: Results of ranking the 5 algorithms on the 5dproblems, based on AUC after 10 000 evaluations.the BBOB functions make use of the same scale factors as used to generate the affine functions inorder to further reduce the impact of scale differences. 
These experiments are performed on both the 2d and 5d versions of these problems.

To analyze the differences in algorithm performance between the two sets of problems, we consider the normalized area under the curve (AUC) of the empirical cumulative distribution function (ECDF) as the performance metric. For the ECDF, we use a set of 51 logarithmically spaced targets from 10^-8 to 10^2. Based on the AUC values, we then rank the set of 5 algorithms on each problem. The distribution of these ranks is shown in Fig. 8a. We observe that the overall patterns between the BBOB and affine problems are preserved. There are some notable differences, particularly with regard to the performance of Cobyla. While this algorithm often performs poorly on BBOB, for the affine problems it is ranked worst in a majority of cases. This suggests that problems where this algorithm performs well (mostly unimodal problems) are not as well-represented in the MA-BBOB functions.

In addition to this ranking, we can also link the ELA features to the algorithm performance. To explore whether the used features might correlate with the problem's difficulty from the algorithm's perspective, we link the dimensionality reduction with the best algorithm from the portfolio. This is visualized for the 5d problems in Fig. 8b.

7 Algorithm Selection

As a final experiment, we now use the generated problems in an algorithm selection context. For each of the 5 algorithms, we train a random forest regression model to predict the AUC on each problem. The input variables for this model are either the ELA features, as is commonly done, or the weights used to generate the functions. By contrasting these approaches, we obtain an intuition for how well the ELA features capture the algorithm-relevant properties of the function.

While we can train our models in a common cross-validation manner, we can also use the same setup to test the generalizability of models trained on the original BBOB problems only.
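As an aside, the normalized ECDF AUC used as the performance metric above can be sketched as follows. The 51 logarithmically spaced targets in [10^-8, 10^2] are taken from the text; the function name, the trajectory format, and the exact normalization (averaging the ECDF value over the evaluation budget) are our own illustrative reading, not necessarily the implementation used in the paper.

```python
import numpy as np

def ecdf_auc(best_so_far: np.ndarray, n_targets: int = 51) -> float:
    """Normalized area under the ECDF curve for a set of runs.

    best_so_far: array of shape (n_runs, n_evals) with the best-so-far
    target precision (f(x) - f_opt) after each evaluation. A run 'hits'
    a target once its precision drops to or below that target; the ECDF
    value at each evaluation is the fraction of (run, target) pairs hit,
    and the normalized AUC is the mean ECDF value over the budget.
    """
    targets = np.logspace(-8, 2, n_targets)  # 51 log-spaced targets
    hits = best_so_far[:, :, None] <= targets[None, None, :]
    ecdf = hits.mean(axis=(0, 2))  # ECDF value per evaluation
    return float(ecdf.mean())      # normalized AUC in [0, 1]

# Toy example: two runs whose precision decays geometrically.
runs = np.array([np.logspace(2, -8, 100), np.logspace(2, -4, 100)])
print(f"AUC: {ecdf_auc(runs):.3f}")
```

A higher AUC corresponds to hitting more targets earlier in the run; ranking the 5 algorithms on a problem then amounts to sorting their AUC values.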
The resulting mean absolute errors (MAE) of these models are plotted in Fig. 9a. We observe that the ELA representation is often worse than the weights-based one. This suggests that the used ELA features might not be sufficient to achieve generalization of an AS model. This is especially clear for the generalizability scenario, where we would have expected ELA to perform better. This poor performance seems to suggest that the ELA features might not fully capture all instance properties that determine the behavior of the algorithms.

Figure 9: Performance of the random forest model predicting algorithm performance (a) or the best algorithm for each problem (b). (a) Mean absolute error obtained when predicting the AUC of each of the 5 algorithms based on either the ELA features or the used weights; top: model trained on a mixture of BBOB and affine functions using 10-fold cross-validation; bottom: model trained on BBOB only and predicting performance on affine problems; left: 2d problems, right: 5d problems. (b) Cumulative distribution of loss (AUC) of the random forest models predicting the best algorithm (2d and 5d problems combined), based on either the ELA features or the weights-representation of the problems.

When training a very basic AS model (predicting the best algorithm) in the same manner (training on BBOB and evaluating on affine), we achieve similar performance differences as suggested by Fig. 9a: the weighted F1-score based on ELA is 0.67, while the score based on weights is 0.70. The corresponding loss in terms of AUC values is plotted in Fig. 9b.
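The train-on-BBOB, evaluate-on-affine setup described above can be sketched as below. All data here is synthetic (random placeholder features and AUC values), so the resulting number carries no meaning; the point is only the shape of the generalizability experiment, which would be repeated per algorithm and per input representation (ELA features vs. weights).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic stand-ins: ELA feature matrices (here 10 features) and AUC
# values for the 120 BBOB instances and the 1 000 affine problems.
ela_bbob = rng.normal(size=(120, 10))
ela_affine = rng.normal(size=(1000, 10))
auc_bbob = rng.uniform(size=120)
auc_affine = rng.uniform(size=1000)

# Generalizability scenario: fit a random forest regressor on BBOB only,
# then predict the AUC of the same algorithm on the affine problems.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(ela_bbob, auc_bbob)
mae = mean_absolute_error(auc_affine, model.predict(ela_affine))
print(f"MAE (ELA, train BBOB -> test affine): {mae:.3f}")
```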
This figure confirms the previous observation that the ELA features are not sufficiently representative to accurately represent the problems in a way which is relevant for ranking optimization algorithms.

8 Conclusions and Future Work

The proposed procedure for generating new problems as an affine combination of the 24 BBOB problems can serve as a function generator to help fill the instance space spanned by the BBOB functions. By applying a scaling step before combining the problems, we make sure that the resulting problems all have an equivalent range of objective values, regardless of the used weights. In addition, the uniform location of the global optima in the full domain avoids some of the bias of the BBOB problems. By analyzing the ELA features of 1 000 of these many-affine MA-BBOB problems, we observed that they do indeed fill a part of the instance space. There are still some inherent limitations arising from the fact that the building blocks are fixed. For example, it is impossible to generate a problem similar to the linear slope. Similarly, it is highly unlikely that new problems have specific properties such as low global structure. Nevertheless, the overall ranking of optimization algorithms on these problems remains similar to the ranking on the BBOB problems, suggesting that the algorithmic challenges might be similar.

The results presented above had as their primary focus a first analysis of the generated MA-BBOB instances and how they compare to the BBOB functions. For this purpose, we have considered randomly sampled instances. The selection of 'representative' instance collections still remains to be done. Another important step for future work is to test the generalization ability of AutoML systems that are trained on MA-BBOB functions and tested on numerical black-box optimization problems that do not originate from the BBOB family.
In this context, our basic Random Forest-based algorithm selector indicates that the ELA features might not be as suitable for this generalization task as expected, motivating further research on feature engineering for black-box optimization.

9 Broader Impact Statement

After careful reflection, the authors have determined that this work presents no notable negative impacts to society or the environment.

10 Submission Checklist

1. For all authors. . .
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes]

2. If you are including theoretical results. . .
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]

3. If you ran experiments. . .
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? [Yes]
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes]
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes]
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes]
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)?
[Yes]
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [N/A]
(g) Did you run ablation studies to assess the impact of different components of your approach? [N/A]
(h) Did you use the same evaluation protocol for the methods being compared? [Yes]
(i) Did you compare performance over time? [Yes]
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes]
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We aggregate data into AUC instead of reporting error bars on fixed-budget or fixed-target results.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A]
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [No] We did not record the computation time needed while running experiments.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [N/A]

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a url? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable?
[N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

Acknowledgements. Our work is financially supported by ANR-22-ERCS-0003-01 project VARIATION, by the CNRS INS2I project IOHprofiler, and by the NWO DACCOMPLI project (628.011.002).

References

[1] Hossein Alipour, Mario Andrés Muñoz, and Kate Smith-Miles. 2023. Enhanced instance space analysis for the maximum flow problem. Eur. J. Oper. Res. 304, 2 (2023), 411–428. https://doi.org/10.1016/j.ejor.2022.04.012
[2] Anne Auger and Nikolaus Hansen. 2020. A SIGEVO Impact Award for a Paper Arising from the COCO Platform: A Summary and Beyond. https://evolution.sigevo.org/issues/HTML/sigevolution-13-4/home.html. Issue 3.
[3] Nacim Belkhir, Johann Dréo, Pierre Savéant, and Marc Schoenauer. 2017. Per instance algorithm configuration of CMA-ES with limited budget. In Proc. of Genetic and Evolutionary Computation (GECCO'17). ACM, 681–688. https://doi.org/10.1145/3071178.3071343
[4] Jakob Bossek, Pascal Kerschke, Aneta Neumann, Markus Wagner, Frank Neumann, and Heike Trautmann. 2019. Evolving diverse TSP instances by means of novel and creative mutation operators. In Proc. of Conference on Foundations of Genetic Algorithms (FOGA'19), Tobias Friedrich, Carola Doerr, and Dirk V. Arnold (Eds.). ACM, 58–71. https://doi.org/10.1145/3299904.3340307
[5] Jakob Bossek and Markus Wagner. 2021. Generating instances with performance differences for more than just two algorithms. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'21, Companion material), Krzysztof Krawiec (Ed.). ACM, 1423–1432. https://doi.org/10.1145/3449726.3463165
[6] Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021. Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules. In Proc.
of Genetic and Evolutionary Computation Conference (GECCO'21, Companion material). ACM, 1375–1384. https://doi.org/10.1145/3449726.3463167
[7] Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021. IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics. CoRR abs/2111.04077 (2021). arXiv:2111.04077 https://arxiv.org/abs/2111.04077
[8] Konstantin Dietrich and Olaf Mersmann. 2022. Increasing the Diversity of Benchmark Function Sets Through Affine Recombination. In Proc. of Parallel Problem Solving from Nature (PPSN'22) (LNCS, Vol. 13398), Günter Rudolph, Anna V. Kononova, Hernán E. Aguirre, Pascal Kerschke, Gabriela Ochoa, and Tea Tusar (Eds.). Springer, 590–602. https://doi.org/10.1007/978-3-031-14714-2_41
[9] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. 2021. COCO: A platform for comparing continuous optimizers in a black-box setting. Optim. Methods Softw. 36, 1 (2021), 114–144.
[10] Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. 2009. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions. Technical Report RR-6829. INRIA. https://hal.inria.fr/inria-00362633/document
[11] Annika Jacobsen, Ricardo de Miranda Azevedo, Nick S. Juty, Dominique Batista, Simon J. Coles, Ronald Cornet, Mélanie Courtot, Mercè Crosas, Michel Dumontier, Chris T. A. Evelo, Carole A. Goble, Giancarlo Guizzardi, Karsten Kryger Hansen, Ali Hasnain, Kristina M. Hettne, Jaap Heringa, Rob W. W. Hooft, Melanie Imming, Keith G. Jeffery, Rajaram Kaliyaperumal, Martijn G. Kersloot, Christine R. Kirkpatrick, Tobias Kuhn, Ignasi Labastida, Barbara Magagna, Peter McQuilton, Natalie Meyers, Annalisa Montesanti, Mirjam van Reisen, Philippe Rocca-Serra, Robert Pergl, Susanna-Assunta Sansone, Luiz Olavo Bonino da Silva Santos, Juliane Schneider, George O. Strawn, Mark Thompson, Andra Waagmeester, Tobias Weigel, Mark D. Wilkinson, Egon L.
Willighagen, Peter Wittenburg, Marco Roos, Barend Mons, and Erik Schultes. 2020. FAIR Principles: Interpretations and Implementation Considerations. Data Intell. 2, 1-2 (2020), 10–29. https://doi.org/10.1162/dint_r_00024
[12] Pascal Kerschke, Holger H. Hoos, Frank Neumann, and Heike Trautmann. 2019. Automated Algorithm Selection: Survey and Perspectives. Evol. Comput. 27, 1 (2019), 3–45. https://doi.org/10.1162/evco_a_00242
[13] Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov, and Carola Doerr. 2022. Per-run Algorithm Selection with Warm-starting using Trajectory-based Features. In Proc. of Parallel Problem Solving from Nature (PPSN'22) (LNCS, Vol. 13398). Springer, 46–60. https://doi.org/10.1007/978-3-031-14714-2_4 Free version available at https://arxiv.org/abs/2204.09483.
[14] Ana Kostovska, Diederick Vermetten, Carola Doerr, Sašo Džeroski, Panče Panov, and Tome Eftimov. 2022. OPTION: OPTImization Algorithm Benchmarking ONtology. IEEE Trans. Evol. Comput. (2022). https://doi.org/10.1109/TEVC.2022.3232844 To appear. Free version available at https://arxiv.org/abs/2211.11332.
[15] Benjamin Lacroix and John McCall. 2019. Limitations of Benchmark Sets and Landscape Features for Algorithm Selection and Performance Prediction. In Proc. of Genetic and Evolutionary Computation (GECCO'19) (Prague, Czech Republic). ACM, New York, NY, USA, 261–262. https://doi.org/10.1145/3319619.3322051
[16] Thibault Lechien, Jorik Jooken, and Patrick De Causmaecker. 2023. Evolving test instances of the Hamiltonian completion problem. Comput. Oper. Res. 149 (2023), 106019. https://doi.org/10.1016/j.cor.2022.106019
[17] Fu Xing Long, Bas van Stein, Moritz Frenzel, Peter Krause, Markus Gitterle, and Thomas Bäck. 2022. Learning the characteristics of engineering optimization problems with applications in automotive crash. In Proc. of Genetic and Evolutionary Computation (GECCO'22), Jonathan E. Fieldsend and Markus Wagner (Eds.). ACM, 1227–1236.
https://doi.org/10.1145/3512290.3528712
[18] Fu Xing Long, Diederick Vermetten, Bas van Stein, and Anna V. Kononova. 2022. BBOB Instance Analysis: Landscape Properties and Algorithm Performance across Problem Instances. CoRR abs/2211.16318 (2022). https://doi.org/10.48550/arXiv.2211.16318 arXiv:2211.16318
[19] Alejandro Marrero, Eduardo Segredo, Coromoto León, and Emma Hart. 2022. A Novelty-Search Approach to Filling an Instance-Space with Diverse and Discriminatory Instances for the Knapsack Problem. In Proc. of Parallel Problem Solving from Nature (PPSN'22) (LNCS, Vol. 13398). Springer, 223–236. https://doi.org/10.1007/978-3-031-14714-2_16
[20] Olaf Mersmann, Bernd Bischl, Heike Trautmann, Mike Preuss, Claus Weihs, and Günter Rudolph. 2011. Exploratory landscape analysis. In Proc. of Genetic and Evolutionary Computation (GECCO'11). ACM, 829–836.
[21] Mario A. Muñoz and Kate Smith-Miles. 2020. Generating New Space-Filling Test Instances for Continuous Black-Box Optimization. Evol. Comput. 28, 3 (2020), 379–404. https://doi.org/10.1162/evco_a_00262
[22] Mario Andrés Muñoz, Tao Yan, Matheus R. Leal, Kate Smith-Miles, Ana Carolina Lorena, Gisele L. Pappa, and Rômulo Madureira Rodrigues. 2021. An Instance Space Analysis of Regression Problems. ACM Trans. Knowl. Discov. Data 15, 2 (2021), 28:1–28:25. https://doi.org/10.1145/3436893
[23] Ana Nikolikj, Carola Doerr, and Tome Eftimov. 2023. RF+clust for Leave-One-Problem-Out Performance Prediction. In Proc. of Applications of Evolutionary Computation (EvoApplications'23). Springer, 285–301.
[24] Raphael Patrick Prager. 2022. pFlacco. https://pypi.org/project/pflacco/.
[25] Raphael Patrick Prager and Heike Trautmann. 2023. Nullifying the Inherent Bias of Non-invariant Exploratory Landscape Analysis Features. In Proc. of Applications of Evolutionary Computation (EvoApplications'23). Springer, 411–425.
[26] Jérémy Rapin and Olivier Teytaud. 2018.
Nevergrad - A gradient-free optimization platform. https://GitHub.com/FacebookResearch/Nevergrad.
[27] Quentin Renau, Johann Dreo, Carola Doerr, and Benjamin Doerr. 2019. Expressiveness and Robustness of Landscape Features. In Proc. of Genetic and Evolutionary Computation (GECCO'19) (Prague, Czech Republic). ACM, 2048–2051. https://doi.org/10.1145/3319619.3326913
[28] Gresa Shala, André Biedenkapp, Noor H. Awad, Steven Adriaensen, Marius Lindauer, and Frank Hutter. 2020. Learning Step-Size Adaptation in CMA-ES. In Proc. of Parallel Problem Solving from Nature (PPSN'20) (LNCS, Vol. 12269). Springer, 691–706. https://doi.org/10.1007/978-3-030-58112-1_48
[29] Ye Tian, Shichen Peng, Xingyi Zhang, Tobias Rodemann, Kay Chen Tan, and Yaochu Jin. 2020. A Recommender System for Metaheuristic Algorithms for Continuous Optimization Based on Deep Recurrent Neural Networks. IEEE Trans. Artif. Intell. 1, 1 (2020), 5–18. https://doi.org/10.1109/TAI.2020.3022339
[30] Diederick Vermetten. 2023. modular Differential Evolution. https://github.com/Dvermetten/ModDE.
[31] Diederick Vermetten, Furong Ye, Thomas Bäck, and Carola Doerr. 2023. Reproducibility files and additional figures. Code repository: https://github.com/Dvermetten/Many-affine-BBOB Data and figure repository: https://doi.org/10.5281/zenodo.7826036.
[32] Diederick Vermetten, Furong Ye, and Carola Doerr. 2023. Using Affine Combinations of BBOB Problems for Performance Assessment. CoRR abs/2303.04573 (2023). https://doi.org/10.48550/arXiv.2303.04573 arXiv:2303.04573
[33] Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, and Thomas Bäck. 2022. IOHanalyzer: Detailed Performance Analysis for Iterative Optimization Heuristic. ACM Trans. Evol. Learn. Optim. 2, 1 (2022), 3:1–3:29. https://doi.org/10.1145/3510426 IOHanalyzer is available at CRAN, on GitHub, and as web-based GUI, see https://iohprofiler.github.io/IOHanalyzer/ for links.
[34] Estefania Yap, Mario Andrés Muñoz, and Kate Smith-Miles. 2022.
Informing Multiobjective Optimization Benchmark Construction Through Instance Space Analysis. IEEE Trans. Evol. Comput. 26, 6 (2022), 1246–1260. https://doi.org/10.1109/TEVC.2022.3205165
[35] Martin Zaefferer and Frederik Rehbach. 2020. Continuous Optimization Benchmarks by Simulation. In Proc. of Parallel Problem Solving from Nature (PPSN'20) (LNCS, Vol. 12269), Thomas Bäck, Mike Preuss, André H. Deutz, Hao Wang, Carola Doerr, Michael T. M. Emmerich, and Heike Trautmann (Eds.). Springer, 273–286. https://doi.org/10.1007/978-3-030-58112-1_19
Keywords: Benchmarking, algorithm selection, black-box optimization, numerical optimization, function generation, instance space, exploratory landscape analysis
MA-BBOB: Many-Affine Combinations of BBOB Functions for Evaluating AutoML Approaches in Noiseless Numerical Black-Box Optimization Contexts

Diederick Vermetten (1), Furong Ye (1), Thomas Bäck (1), Carola Doerr (2)
(1) Leiden Institute for Advanced Computer Science (LIACS), Leiden University, The Netherlands
(2) Sorbonne Université, CNRS, LIP6, Paris, France

Abstract. Extending a recent suggestion to generate new instances for numerical black-box optimization benchmarking by interpolating pairs of the well-established BBOB functions from the COmparing COntinuous Optimizers (COCO) platform, we propose in this work a further generalization that allows multiple affine combinations of the original instances and arbitrarily chosen locations of the global optima.

We demonstrate that the MA-BBOB generator can help fill the instance space, while overall patterns in algorithm performance are preserved. By combining the landscape features of the problems with the performance data, we pose the question of whether these features are as useful for algorithm selection as previous studies have implied.

MA-BBOB is built on the publicly available IOHprofiler platform, which facilitates standardized experimentation routines, provides access to the interactive IOHanalyzer module for performance analysis and visualization, and enables comparisons with the rich and growing data collection available for the (MA-)BBOB functions.

1 Introduction

Despite a long tradition of developing automated Machine Learning (AutoML) approaches for numerical black-box optimization contexts [3, 12, 28], empirical evaluations are heavily centered around very few benchmark collections. One of the most popular collections is the BBOB suite [10] of the COmparing COntinuous Optimizers (COCO) platform [9]. The BBOB suite was originally designed to help researchers analyze the behavior of numerical black-box algorithms in different optimization contexts.
Over time, however, BBOB has been used for many other purposes, including evaluating AutoML methods, even though the problems were never designed to be suitable for this task.

With the increasing popularity of the BBOB benchmarks, wide availability of shared performance data enabled the application of, e.g., algorithm selection methods [12]. To achieve these algorithm selectors, a representation of the problem space is required based on which the performance of different algorithms can be predicted. In the case of BBOB, the most commonly used representation makes use of Exploratory Landscape Analysis (ELA), which has been shown to be able to accurately distinguish between BBOB problems [20, 27].

A key problem of algorithm selection based on BBOB problems lies in the ability to test how well the results generalize. One approach is to use a leave-one-function-out method [23], where the selector is trained on 23 functions and tested on the remaining one. This generally leads to poor performance, as each problem has been specifically designed to have different global function properties. As such, another common method is to leave out a set of problem instances for testing. This way, the selector is trained on all types of problems. However, this has a high potential to overfit the particular biases of the BBOB problems, an often overlooked risk.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0

To remedy these potential issues, the ability to construct new functions which fill the spaces between existing BBOB functions could be critical.
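The leave-one-function-out protocol described above can be sketched with scikit-learn's LeaveOneGroupOut, using the 24 BBOB function ids as group labels; the feature matrix here is random placeholder data standing in for per-instance ELA features.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
X = rng.normal(size=(24 * 5, 8))         # e.g. 5 instances per BBOB function
groups = np.repeat(np.arange(1, 25), 5)  # function id of each instance

# Each split trains on the instances of 23 functions and tests on the
# instances of the single held-out function.
splits = list(LeaveOneGroupOut().split(X, groups=groups))
print(f"{len(splits)} splits, test-set size {len(splits[0][1])}")
```

The instance-based alternative mentioned in the text (leaving out instances instead of whole functions) corresponds to grouping by instance id instead of function id.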
If the instance space can be filled with new problems, these could be used not only to test the generalizability of algorithm selection methods, but also, more generally, to gain insights into e.g., the relation between the ELA representation of a problem and the behavior of optimization algorithms.

Filling the instance space is a topic of rising interest within the optimization community [1, 19, 22, 34]. While some work has been conducted to create problem instances that reflect the properties of real-world applications or obtain similar characteristics to existing problems, other work is trying to generate diverse instances. For example, symbolic regression and simulation of Gaussian processes have been applied to generate benchmarks reflecting real-world problem behaviours in [35] and [17, 29]. On the other hand, research in generating diverse instances of combinatorial optimization has been conducted in [4, 5, 16, 19]. Regarding black-box numerical optimization, approaches based on Genetic Programming (GP) have succeeded in generating novel problem instances with controllable characteristics defined by their ELA features in [21], in which the authors used ELA features of BBOB instances as a baseline to regenerate similar instances and design diverse instances. However, to obtain problems with desired characteristics, the GP needs to be executed for each dimension. A recent paper proposed a different perspective on generating new problem instances for numerical optimization: Dietrich and Mersmann [8] propose to create new problems through weighted combinations of BBOB problems. By creating these affine combinations of existing problems, it seems that the ELA features can transition smoothly between the two component functions. Moreover, affine combinations of two BBOB problems were applied to analyze the behavior of optimization algorithms in [32].
The paper's results demonstrated that the algorithms' performance changes along with the weights of the two combined problems.

In this paper, we extend upon the modified version of the affine BBOB combinations [32] by generalizing to combinations between any number of BBOB functions. Through doing this, we address the concerns regarding the scaling of the component functions and the impact of the location of the global optimum. We also propose a modified mechanism to sample weights to avoid potential biases resulting from including too many problems.

From the proposed many-affine problem generation method, we sample 1 000 instances, for which we perform both an ELA-based analysis as well as an analysis of the performance of a set of algorithms. By combining these results in a simple algorithm selection model, we raise the question of whether or not the ELA features are sufficiently representative to create a generalizable algorithm selection model.

In summary, our key contributions and findings are:

1. We introduce MA-BBOB, a generator of arbitrary affine combinations of the 24 BBOB functions. We explain the rationales behind the various design choices, which include the location of the optimum, the scaling used for interpolating the different functions, and the way of sampling functions from this space.
The resulting generator is built on the IOHprofiler platform, which enables equivalent benchmarking setups to the original BBOB problems.

2. We analyze 1 000 randomly sampled instances in 2d and in 5d via Exploratory Landscape Analysis (ELA [20]) and show that the combined MA-BBOB functions cover the space between the original 'pure' BBOB functions quite well, with the exception of some problems like the linear slope and ellipsoid problem, which are essentially only available in the 'pure' BBOB functions, but disappear in the MA-BBOB instances with non-trivial weights.

3. We compare the performance of five black-box optimization algorithms on the original BBOB and the 1 000 randomly sampled MA-BBOB instances and show that the rank distribution changes slightly in favour of the CMA-ES algorithms and to the disadvantage of RCobyla.

4. Finally, we also perform per-instance algorithm performance prediction studies on MA-BBOB. The results confirm that the regression accuracy is better when the training set includes generalized BBOB functions. However, we also observe a considerable performance gap between ELA-based regression models and those trained with full knowledge of the weights that are used to construct the test instances. These results indicate that the current set of ELA features fails to capture some instance properties that are crucial for algorithm performance, a shortcoming that we expect to motivate future research on the design of features for numerical black-box optimization.

2 Background

The BBOB Problem Suite. The BBOB collection [10] is one of the main components of the COCO framework [9]. It is heavily used in the black-box optimization community for evaluating derivative-free numerical optimization techniques.
On the original BBOB suite of 24 single-objective, noiseless optimization problems [10], hundreds of different optimization algorithms have been tested [2].

One key reason for the popularity of this suite is the ability to create independent instances of the same problem, which are generated by applying transformations in the domain and the objective space. These transformations include rotation, scaling of objective value, and moving the location of the global optimum. They allow researchers to evaluate possible bias in their algorithms, and are hence an important component of algorithm benchmarking.

The availability of many instances is also a key enabler for the evaluation of AutoML approaches in black-box optimization contexts. Since not all instances are easily accessible via the original COCO implementation, we have made direct access to the instances available in our IOHprofiler benchmarking environment [7, 33].

Affine Function Combinations. While the availability of numerous instances per each BBOB function facilitates AutoML studies, it has been observed that the generalization ability of models trained on BBOB and tested on independent problems is disappointing [13, 15]. This motivated the design of new problems to extend the existing BBOB suite. One such approach was proposed in [8]. It suggests to consider affine combinations of two different problem instances [8]. The resulting problems were analyzed with respect to their fitness landscapes, as seen via exploratory landscape analysis (ELA [20]). They have been shown to smoothly connect their component functions in a reduced-dimensionality ELA space. This seems to imply that we can use these problems to connect any pair of existing problems, which would significantly add to the instance space.

In our follow-up study [32] we recently proposed a modified version of creating these affine function combinations, see Sec. 3.1 for details.
We used these functions to compare the performance of five selected black-box optimization algorithms and showed that the behavior differences are not as smooth as the differences in ELA space. In several cases, combinations of two functions are best solved by a different algorithm than the one which solved the component problems.

3 The MA-BBOB Benchmark Suite

3.1 Scaling of Function Values

When combining multiple functions to create a new benchmark problem, one key factor which impacts the landscape is the scaling of the combined functions. Since we are interested in taking affine combinations of existing functions, a difference in scale might lead one function to dominate all others, leading to limited coverage of the feature space.

The original affine BBOB functions proposed in [8] make use of a tuning procedure for finding useable weights. While this allows for selecting suitable problems, it makes it more challenging to just randomly sample a set of new problems. We therefore suggested an alternative way to generate the affine combinations in [32]. This change is two-fold: each component problem f is first transformed by subtracting the global optimum value min f. This way, we know that each component function's optimum function value is set to 0.

Figure 1: Log-scaled fitness values of an example of a single many-affine function with 5 different ways of scaling. The first 4 take the mean, max, (max+min)/2, and min of 50 000 random samples to create the scale factor, while the 'equal' option does not make use of this scaling.

Table 1: Final scale factors used to generate MA-BBOB problems.
Function ID   |  1    2    3    4    5    6    7    8    9   10   11   12
Scale Factor  | 11.0 17.5 12.3 12.6 11.5 15.3 12.1 15.3 15.2 17.4 13.4 20.4
Function ID   | 13   14   15   16   17   18   19   20   21   22   23   24
Scale Factor  | 12.9 10.4 12.3 10.3  9.8 10.6 10.0 14.7 10.7 10.8  9.0 12.1
Then, instead of arithmetic weighting, a logarithmic combination is used to limit the impact of scale differences. While this simplifies the procedure of generating random function combinations, BBOB functions can sometimes differ by multiple orders of magnitude, which still produces some bias in this procedure.

To address this shortcoming in MA-BBOB, we have investigated different scaling procedures. We still scale the global optima and perform a logarithmic transform, but we now add a normalization step. This transforms the log-precision values into an approximation of [0, 1], and then maps this back to the commonly used BBOB domain [10^-8, 10^2]. This is achieved by taking the log-transformed precision (capped at -8), adding 8 so the minimum is at 0, and dividing by a scale factor. The aim of this procedure is to make sure that the target precision of 10^2 is similarly easy to achieve on all problems.

In order to select appropriate scale factors, we need to determine practical limits of the function value for each BBOB function. We do this by considering a set of 50 000 random samples and aggregating the corresponding function values. We consider the following aggregation methods (based on the log-scaled precision): min, mean, max, (max+min)/2. Fig. 1 illustrates the differences between these methods for a 2d problem. Note that because we use log-scaled precision, the differences between instances are rather small, so we opted to only do the sampling for one instance of each BBOB problem. Based on visual interpretation of the contour plots in Fig. 1, we (somewhat subjectively) select the (max+min)/2 scaling as the most promising method.

To avoid having to constantly repeat this random sampling procedure, we also investigate the way in which the scales of the random samples, and thus the scale factors, differ across dimensions. The results are shown in Fig. 2. With the exception of the smallest dimensions, the values remain quite stable.
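As an illustration, the normalization step described above can be sketched in a few lines of Python. This is not the reference implementation (the actual code is available via [31]); the function name, the small guard constant, and the exact mapping back onto [10^-8, 10^2] are illustrative assumptions based on one reading of the text.

```python
import numpy as np

# Hard-coded per-function scale factors from Tab. 1 (BBOB functions F1..F24).
SCALE_FACTORS = np.array([
    11.0, 17.5, 12.3, 12.6, 11.5, 15.3, 12.1, 15.3, 15.2, 17.4, 13.4, 20.4,
    12.9, 10.4, 12.3, 10.3, 9.8, 10.6, 10.0, 14.7, 10.7, 10.8, 9.0, 12.1,
])

def normalize_precision(raw_value, f_opt, fid):
    """Map a raw function value onto the common BBOB range [1e-8, 1e2].

    Steps as described in the text: subtract the global optimum, take the
    log10 of the precision (capped below at -8), add 8 so the minimum is
    at 0, divide by the per-function scale factor (yielding roughly
    [0, 1]), then map back onto [1e-8, 1e2] on a log scale.
    """
    prec = max(raw_value - f_opt, 1e-12)          # guard against log10(0)
    log_prec = max(np.log10(prec), -8.0)          # cap log-precision at -8
    scaled = (log_prec + 8.0) / SCALE_FACTORS[fid - 1]
    return 10 ** (10.0 * scaled - 8.0)            # [0,1] -> [1e-8, 1e2]
```

Capping at -8 mirrors the usual BBOB convention of treating a precision of 10^-8 as "solved", so a component sitting exactly at its optimum maps to the lower end of the common range.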
As such, we decide to implement them as hard-coded values based on the median of the shown values, rounded to the nearest decimal. The resulting factors are shown in Tab. 1.

3.2 Instance Creation

A second aspect to consider when combining multiple functions is the placement of the global optimum. In the previous two papers [8, 32] on affine BBOB functions, this was done based on the instance of one of the two component functions. However, the original BBOB instance creation process can be considered somewhat biased, as not all functions make use of the same transformations [10, 18]. As such, if we extend the process of using the optimum of one of the used component functions, the optimum would be distributed as in Fig. 3. To avoid this issue, we decided to generate the optimum location separately, uniformly at random in the full domain [-5, 5]^d. Fig. 4 shows how a 2d function changes when moving the optimum location.

Figure 2: Evolution of the log-scaled (max+min)/2 scaling factor, relative to the problem dimension. The values are based on 50 000 samples. Each line corresponds to one of the 24 BBOB functions.

Figure 3: Location of the optima of the 24 2d BBOB functions. The red lines mark the commonly used box-constraints of [-5, 5]^D.

Figure 4: Log-scaled fitness values of an example of a single many-affine function with changed location of optimum.

3.3 Sampling random functions

As a final factor impacting the types of problems generated, we consider the way in which weights are sampled. While this can indeed be done uniformly at random (with a normalization afterwards), this might not lead to the most useful set of benchmark problems.
When the weights for each function are generated this way, the probability of having a weight of 0 for any component is 0. This means that every function will contribute to some extent to the newly generated problem. As such, it would be almost impossible for this procedure to result in a unimodal problem.

One way to address this bias in function generation is to adapt how many functions are part of the newly created problem. Indeed, the combinations of two problems already lead to a vast space of interesting landscapes. We opt for a different approach: we make use of a threshold value which determines which functions contribute to the problem. The procedure for generating weights is thus as follows: (1) generate initial weights uniformly at random, (2) adapt the threshold to be the minimum of the selected value and the third-highest weight, (3) subtract this threshold from the weights and set all negative values to 0. The second step ensures that at least two problems always contribute to the new problem. Fig. 5 provides an example of a problem generated with different threshold values. We decide to set the default value at T=0.85, such that on average 3.6 problems will have a non-zero weight.

Figure 5: Log-scaled fitness values of an example of a 'single' many-affine function with 5 different sampling thresholds.

4 Experimental Setup

In the remainder of this paper, we will make use of 1 000 functions, with weights sampled according to Sec. 3.3 with T=0.85. Each problem uses instances uniformly selected between 1 and 100 for each of the component functions, and uniformly sampled locations of the global optimum. We use the same set of weights, instances, and optima locations in both 5 and 2 dimensions.

Comparing this set of generated problems with the pure BBOB functions is a key aspect of this work.
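The three-step weight-generation procedure of Sec. 3.3 can be sketched as follows. This is an illustrative reading rather than the reference implementation from [31]; the function name and the final normalization of the weights to sum 1 are assumptions.

```python
import numpy as np

def sample_weights(n_functions=24, threshold=0.85, rng=None):
    """Sample sparse MA-BBOB component weights (Sec. 3.3).

    (1) draw initial weights uniformly at random,
    (2) lower the threshold to the third-highest weight if needed,
        so that at least two components always survive,
    (3) subtract the threshold and clip negative weights to zero,
    then normalize the surviving weights to sum to one.
    """
    rng = np.random.default_rng(rng)
    w = rng.uniform(size=n_functions)
    t = min(threshold, np.sort(w)[-3])   # step (2): keep >= 2 non-zero weights
    w = np.clip(w - t, 0.0, None)        # step (3)
    return w / w.sum()
```

With the default T=0.85 and 24 uniform draws, on average about 3.6 weights exceed the threshold, matching the expected sparsity reported in the text.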
To remove biases in terms of scaling, we apply the same scale factors to the BBOB functions. Practically, this means we use the all-zero weights with a 1 for the selected function to collect the BBOB data (with the location of the optima set as original). We use 5 instances of each BBOB function for our comparisons. We refer to these 'pure' BBOB functions as 'BBOB', while we refer to the MA-BBOB instances as 'affine'.

Reproducibility: The code used during this project, as well as all resulting data, is available at [31]. The repository also contains additional versions of the figures which could not be included here because of the page limit. We are actively working towards a data repository for MA-BBOB performance data which will also allow automated annotation via the OPTION ontology [14], for FAIR data sharing [11].

5 Landscape Analysis

To analyze the landscapes of the created affine problems, we make use of the pflacco package [24] to compute ELA features. We use 5 sets of 1 000d points from a scrambled Sobol' sequence. We then evaluate these points and follow the advice of [25] to use min-max normalization on these function values. We finally remove all features which are constant across all problems or contain NaN values, resulting in a total of 44 remaining features. For each of these features, we then take the mean value among the 5 samples.

To gain insight into the differences between the BBOB and affine functions, we reduce the original 44-dimensional space into 2d. To achieve this, we make use of the Uniform Manifold Approximation Projection (UMAP). To focus on the parts of the instance space covered by the newly generated problems, we create the mapping based only on the BBOB problems. The result of applying this mapping to all 2d problems is visualized in Fig. 6b.

Figure 6: UMAP reduction of the 24 BBOB functions (5 instances each) and 1000 affine combinations for 5d (a) and 2d (b). The projection is created based on the BBOB problems only. In (a), points are colored according to the weights used for BBOB function F7; in (b), points are colored according to the function type: BBOB or affine combination.

Figure 7: Distribution of (normalized) ELA feature values on the 5d version of the problems.

From Fig. 6b, we observe that many of the affine problems are clustered together. While some regions between existing BBOB problems are filled, it seems that the function generation process is not able to find solutions close to every BBOB problem. This might be caused by the fact that, by combining an average of 3.6 functions, it is highly unlikely that we find functions similar to, e.g., a linear slope or a function with low global structure.

In addition to the dimensionality reduction, we can also investigate the distributions of individual ELA features.
By comparing the distributions on the BBOB functions with the ones on the affine problems, we can gain some insight into the most common types of problems generated. In Fig. 7, we show these distributions for the min-max normalized ELA features. From this figure, we can see that for many features, the affine problems are much more clustered than the BBOB ones, which are distributed more uniformly over the space of feature values.

6 Algorithm Performance

While the ELA-based analysis gives us some insight into the low-level characteristics of the generated problems, it does not directly give insight into the power of these problems to differentiate between algorithms. As such, we also run a set of 5 different algorithms on each problem instance. The algorithms we consider are: (1) Diagonal CMA-ES from the Nevergrad platform [26] (dCMA), (2) RCobyla from the Nevergrad platform [26] (Cobyla), (3) Differential Evolution from the Nevergrad platform [26] (DE), (4) CMA-ES from the modular CMA-ES package [6] (modCMA), and (5) L-SHADE, implemented using the modular DE package [30] (modDE).

For each of these algorithms, we perform 50 independent runs on each of the 1 000 affine functions as well as the 5 instances from each of the 24 BBOB problems. It is important to note that the BBOB functions make use of the same scale factors as used to generate the affine functions in order to further reduce the impact of scale differences.

Figure 8: Results of ranking the 5 algorithms on the 5d problems, based on AUC after 10 000 evaluations. (a) Distribution of ranks based on per-function AUC after 10 000 evaluations. (b) UMAP reduction of the BBOB functions (5 instances) and 1000 affine combinations; projection created based on BBOB only; color based on the algorithm with the largest AUC.
These experiments are performed on both the 2d and 5d versions of these problems.

To analyze the differences in algorithm performance between the two sets of problems, we consider the normalized area under the curve (AUC) of the empirical cumulative distribution function (ECDF) as the performance metric. For the ECDF, we use a set of 51 logarithmically spaced targets from 10^-8 to 10^2. Based on the AUC values, we then rank the set of 5 algorithms on each problem. The distribution of these ranks is shown in Fig. 8a. We observe that the overall patterns between the BBOB and affine problems are preserved. There are some notable differences, particularly with regard to the performance of Cobyla. While this algorithm often performs poorly on BBOB, for the affine problems it is ranked worst in a majority of cases. This suggests that problems where this algorithm performs well (mostly unimodal problems) are not as well-represented in the MA-BBOB functions.

In addition to this ranking, we can also link the ELA features to the algorithm performance. To explore whether the used features might correlate with the problem's difficulty from the algorithm's perspective, we link the dimensionality reduction with the best algorithm from the portfolio. This is visualized for the 5d problems in Fig. 8b.

7 Algorithm Selection

As a final experiment, we now use the generated problems in an algorithm selection context. For each of the 5 algorithms, we train a random forest regression model to predict the AUC on each problem. The input variables for this model are either the ELA features, as is commonly done, or the weights used to generate the functions. By contrasting these approaches, we obtain an intuition for how well the ELA features capture the algorithm-relevant properties of the function.

While we can train our models in a common cross-validation manner, we can also use the same setup to test the generalizability of models trained on the original BBOB problems only.
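The normalized ECDF AUC used as the performance metric above can be sketched as follows. This is a minimal illustration assuming best-so-far precision histories per run and uniform weighting over evaluation budgets; the exact aggregation in the benchmarking pipeline of [31] may differ in detail.

```python
import numpy as np

# 51 logarithmically spaced targets from 1e2 down to 1e-8.
TARGETS = np.logspace(2, -8, 51)

def ecdf_auc(best_so_far):
    """Approximate the normalized area under the ECDF curve.

    `best_so_far` is an array of shape (n_runs, n_evals) holding the
    best-so-far precision (f(x) - f_opt) of each run after each
    evaluation. For every budget we count the fraction of (run, target)
    pairs already hit; averaging over budgets gives the normalized AUC.
    """
    best = np.minimum.accumulate(np.asarray(best_so_far), axis=1)
    hits = best[:, :, None] <= TARGETS[None, None, :]  # runs x evals x targets
    return hits.mean()
```

An AUC of 1 would mean every run hits all 51 targets immediately; 0 would mean no target is ever reached within the budget.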
The resulting mean absolute errors (MAE) of these random forest models are plotted in Fig. 9a. We observe that the ELA representation is often worse than the weights-based one. This suggests that the used ELA features might not be sufficient to achieve generalization of an AS model. This is especially clear for the generalizability scenario, where we would have expected ELA to perform better. This poor performance seems to suggest that the ELA features might not fully capture all instance properties that determine the behavior of the algorithms.

Figure 9: Performance of the random forest model predicting algorithm performance (a) or the best algorithm for each problem (b). (a) Mean absolute error obtained when predicting the AUC of each of the 5 algorithms based on either the ELA features or the used weights. Top: model trained on a mixture of BBOB and affine functions using 10-fold cross-validation. Bottom: model trained on BBOB only and predicting performance on affine problems. Left: 2d problems, right: 5d problems. (b) Cumulative distribution of loss (AUC) of the random forest models predicting the best algorithm (2d and 5d problems combined), based on either the ELA features or the weights representation of the problems.

When training a very basic AS model (predicting the best algorithm) in the same manner (training on BBOB and evaluating on affine), we achieve similar performance differences as suggested by Fig. 9a: the weighted F1-score based on ELA is 0.67, while the score based on weights is 0.70. The corresponding loss in terms of AUC values is plotted in Fig. 9b.
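A minimal sketch of the contrast between the two problem representations, using scikit-learn. The function name, the number of trees, and the synthetic inputs in the usage are illustrative assumptions rather than the study's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_predict

def compare_representations(ela_features, weights, auc, seed=0):
    """Contrast ELA-based and weights-based AUC regression (cf. Fig. 9a).

    `ela_features` (n_problems x n_features) and `weights`
    (n_problems x 24) are two representations of the same problems;
    `auc` holds one algorithm's performance per problem. Returns the
    10-fold cross-validated MAE for both representations.
    """
    scores = {}
    for name, X in (("ela", ela_features), ("weights", weights)):
        model = RandomForestRegressor(n_estimators=100, random_state=seed)
        pred = cross_val_predict(model, X, auc, cv=10)
        scores[name] = mean_absolute_error(auc, pred)
    return scores
```

For the generalizability scenario described in the text, the cross-validation would be replaced by fitting on the BBOB rows only and evaluating on the affine rows.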
Fig. 9b confirms the previous observation that the ELA features are not sufficiently representative to accurately represent the problems in a way which is relevant for ranking optimization algorithms.

8 Conclusions and Future Work

The proposed procedure for generating new problems as affine combinations of the 24 BBOB problems can serve as a function generator to help fill the instance space spanned by the BBOB functions. By applying a scaling step before combining the problems, we make sure that the resulting problems all have an equivalent range of objective values, regardless of the used weights. In addition, the uniform location of the global optima in the full domain avoids some of the bias of the BBOB problems. By analyzing the ELA features of 1 000 of these many-affine MA-BBOB problems, we observed that they do indeed fill a part of the instance space. There are still some inherent limitations arising from the fact that the building blocks are fixed. For example, it is impossible to generate a problem similar to the linear slope. Similarly, it is highly unlikely that new problems have specific properties such as low global structure. Nevertheless, the overall ranking of optimization algorithms on these problems remains similar to the ranking on the BBOB problems, suggesting that the algorithmic challenges might be similar.

The results presented above had as their primary focus a first analysis of the generated MA-BBOB instances and how they compare to the BBOB functions. For this purpose, we have considered randomly sampled instances. The selection of 'representative' instance collections still remains to be done. Another important step for future work is to test the generalization ability of AutoML systems that are trained on MA-BBOB functions and tested on numerical black-box optimization problems that do not originate from the BBOB family.
In this context, our basic Random Forest-based algorithm selector indicates that the ELA features might not be as suitable for this generalization task as expected, motivating further research on feature engineering for black-box optimization.

9 Broader Impact Statement

After careful reflection, the authors have determined that this work presents no notable negative impacts to society or the environment.

10 Submission Checklist

1. For all authors. . .

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes]

2. If you are including theoretical results. . .

(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]

3. If you ran experiments. . .

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? [Yes]
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes]
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes]
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes]
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)?
[Yes]
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [N/A]
(g) Did you run ablation studies to assess the impact of different components of your approach? [N/A]
(h) Did you use the same evaluation protocol for the methods being compared? [Yes]
(i) Did you compare performance over time? [Yes]
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes]
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We aggregate data into AUC instead of reporting error bars on fixed-budget or fixed-target results.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A]
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [No] We did not record the computation time needed while running experiments.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? [N/A]

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .

(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a url? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

5. If you used crowdsourcing or conducted research with human subjects. . .

(a) Did you include the full text of instructions given to participants and screenshots, if applicable?
[N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

Acknowledgements. Our work is financially supported by ANR-22-ERCS-0003-01 project VARIATION, by the CNRS INS2I project IOHprofiler, and by the NWO DACCOMPLI project (628.011.002).

References

[1] Hossein Alipour, Mario Andrés Muñoz, and Kate Smith-Miles. 2023. Enhanced instance space analysis for the maximum flow problem. Eur. J. Oper. Res. 304, 2 (2023), 411–428. https://doi.org/10.1016/j.ejor.2022.04.012

[2] Anne Auger and Nikolaus Hansen. 2020. A SIGEVO Impact Award for a Paper Arising from the COCO Platform: A Summary and Beyond. https://evolution.sigevo.org/issues/HTML/sigevolution-13-4/home.html. Issue 3.

[3] Nacim Belkhir, Johann Dréo, Pierre Savéant, and Marc Schoenauer. 2017. Per instance algorithm configuration of CMA-ES with limited budget. In Proc. of Genetic and Evolutionary Computation (GECCO'17). ACM, 681–688. https://doi.org/10.1145/3071178.3071343

[4] Jakob Bossek, Pascal Kerschke, Aneta Neumann, Markus Wagner, Frank Neumann, and Heike Trautmann. 2019. Evolving diverse TSP instances by means of novel and creative mutation operators. In Proc. of Conference on Foundations of Genetic Algorithms (FOGA'19), Tobias Friedrich, Carola Doerr, and Dirk V. Arnold (Eds.). ACM, 58–71. https://doi.org/10.1145/3299904.3340307

[5] Jakob Bossek and Markus Wagner. 2021. Generating instances with performance differences for more than just two algorithms. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'21, Companion material), Krzysztof Krawiec (Ed.). ACM, 1423–1432. https://doi.org/10.1145/3449726.3463165

[6] Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021. Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules. In Proc.
of Genetic and Evolutionary Computation Conference (GECCO'21, Companion material). ACM, 1375–1384. https://doi.org/10.1145/3449726.3463167

[7] Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021. IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics. CoRR abs/2111.04077 (2021). arXiv:2111.04077 https://arxiv.org/abs/2111.04077

[8] Konstantin Dietrich and Olaf Mersmann. 2022. Increasing the Diversity of Benchmark Function Sets Through Affine Recombination. In Proc. of Parallel Problem Solving from Nature (PPSN'22) (LNCS, Vol. 13398), Günter Rudolph, Anna V. Kononova, Hernán E. Aguirre, Pascal Kerschke, Gabriela Ochoa, and Tea Tusar (Eds.). Springer, 590–602. https://doi.org/10.1007/978-3-031-14714-2_41

[9] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. 2021. COCO: A platform for comparing continuous optimizers in a black-box setting. Optim. Methods Softw. 36, 1 (2021), 114–144.

[10] Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. 2009. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions. Technical Report RR-6829. INRIA. https://hal.inria.fr/inria-00362633/document

[11] Annika Jacobsen, Ricardo de Miranda Azevedo, Nick S. Juty, Dominique Batista, Simon J. Coles, Ronald Cornet, Mélanie Courtot, Mercè Crosas, Michel Dumontier, Chris T. A. Evelo, Carole A. Goble, Giancarlo Guizzardi, Karsten Kryger Hansen, Ali Hasnain, Kristina M. Hettne, Jaap Heringa, Rob W. W. Hooft, Melanie Imming, Keith G. Jeffery, Rajaram Kaliyaperumal, Martijn G. Kersloot, Christine R. Kirkpatrick, Tobias Kuhn, Ignasi Labastida, Barbara Magagna, Peter McQuilton, Natalie Meyers, Annalisa Montesanti, Mirjam van Reisen, Philippe Rocca-Serra, Robert Pergl, Susanna-Assunta Sansone, Luiz Olavo Bonino da Silva Santos, Juliane Schneider, George O. Strawn, Mark Thompson, Andra Waagmeester, Tobias Weigel, Mark D. Wilkinson, Egon L.
Willighagen, Peter Wittenburg, Marco Roos, Barend Mons, and Erik Schultes. 2020. FAIR Principles: Interpretations and Implementation Considerations. Data Intell. 2, 1-2 (2020), 10–29. https://doi.org/10.1162/dint_r_00024

[12] Pascal Kerschke, Holger H. Hoos, Frank Neumann, and Heike Trautmann. 2019. Automated Algorithm Selection: Survey and Perspectives. Evol. Comput. 27, 1 (2019), 3–45. https://doi.org/10.1162/evco_a_00242

[13] Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov, and Carola Doerr. 2022. Per-run Algorithm Selection with Warm-starting using Trajectory-based Features. In Proc. of Parallel Problem Solving from Nature (PPSN'22) (LNCS, Vol. 13398). Springer, 46–60. https://doi.org/10.1007/978-3-031-14714-2_4 Free version available at https://arxiv.org/abs/2204.09483.

[14] Ana Kostovska, Diederick Vermetten, Carola Doerr, Sašo Džeroski, Panče Panov, and Tome Eftimov. 2022. OPTION: OPTImization Algorithm Benchmarking ONtology. IEEE Trans. Evol. Comput. (2022). https://doi.org/10.1109/TEVC.2022.3232844 To appear. Free version available at https://arxiv.org/abs/2211.11332.

[15] Benjamin Lacroix and John McCall. 2019. Limitations of Benchmark Sets and Landscape Features for Algorithm Selection and Performance Prediction. In Proc. of Genetic and Evolutionary Computation (GECCO'19) (Prague, Czech Republic). ACM, New York, NY, USA, 261–262. https://doi.org/10.1145/3319619.3322051

[16] Thibault Lechien, Jorik Jooken, and Patrick De Causmaecker. 2023. Evolving test instances of the Hamiltonian completion problem. Comput. Oper. Res. 149 (2023), 106019. https://doi.org/10.1016/j.cor.2022.106019

[17] Fu Xing Long, Bas van Stein, Moritz Frenzel, Peter Krause, Markus Gitterle, and Thomas Bäck. 2022. Learning the characteristics of engineering optimization problems with applications in automotive crash. In Proc. of Genetic and Evolutionary Computation (GECCO'22), Jonathan E. Fieldsend and Markus Wagner (Eds.). ACM, 1227–1236.
https://doi.org/10.1145/3512290.3528712

[18] Fu Xing Long, Diederick Vermetten, Bas van Stein, and Anna V. Kononova. 2022. BBOB Instance Analysis: Landscape Properties and Algorithm Performance across Problem Instances. CoRR abs/2211.16318 (2022). https://doi.org/10.48550/arXiv.2211.16318 arXiv:2211.16318

[19] Alejandro Marrero, Eduardo Segredo, Coromoto León, and Emma Hart. 2022. A Novelty-Search Approach to Filling an Instance-Space with Diverse and Discriminatory Instances for the Knapsack Problem. In Proc. of Parallel Problem Solving from Nature (PPSN'22) (LNCS, Vol. 13398). Springer, 223–236. https://doi.org/10.1007/978-3-031-14714-2_16

[20] Olaf Mersmann, Bernd Bischl, Heike Trautmann, Mike Preuss, Claus Weihs, and Günter Rudolph. 2011. Exploratory landscape analysis. In Proc. of Genetic and Evolutionary Computation (GECCO'11). ACM, 829–836.

[21] Mario A. Muñoz and Kate Smith-Miles. 2020. Generating New Space-Filling Test Instances for Continuous Black-Box Optimization. Evol. Comput. 28, 3 (2020), 379–404. https://doi.org/10.1162/evco_a_00262

[22] Mario Andrés Muñoz, Tao Yan, Matheus R. Leal, Kate Smith-Miles, Ana Carolina Lorena, Gisele L. Pappa, and Rômulo Madureira Rodrigues. 2021. An Instance Space Analysis of Regression Problems. ACM Trans. Knowl. Discov. Data 15, 2 (2021), 28:1–28:25. https://doi.org/10.1145/3436893

[23] Ana Nikolikj, Carola Doerr, and Tome Eftimov. 2023. RF+clust for Leave-One-Problem-Out Performance Prediction. In Proc. of Applications of Evolutionary Computation (EvoApplications'23). Springer, 285–301.

[24] Raphael Patrick Prager. 2022. pFlacco. https://pypi.org/project/pflacco/.

[25] Raphael Patrick Prager and Heike Trautmann. 2023. Nullifying the Inherent Bias of Non-invariant Exploratory Landscape Analysis Features. In Proc. of Applications of Evolutionary Computation (EvoApplications'23). Springer, 411–425.

[26] Jérémy Rapin and Olivier Teytaud. 2018.
Nevergrad - A gradient-free optimization platform. https://GitHub.com/FacebookResearch/Nevergrad.

[27] Quentin Renau, Johann Dreo, Carola Doerr, and Benjamin Doerr. 2019. Expressiveness and Robustness of Landscape Features. In Proc. of Genetic and Evolutionary Computation (GECCO'19) (Prague, Czech Republic). ACM, 2048–2051. https://doi.org/10.1145/3319619.3326913

[28] Gresa Shala, André Biedenkapp, Noor H. Awad, Steven Adriaensen, Marius Lindauer, and Frank Hutter. 2020. Learning Step-Size Adaptation in CMA-ES. In Proc. of Parallel Problem Solving from Nature (PPSN'20) (LNCS, Vol. 12269). Springer, 691–706. https://doi.org/10.1007/978-3-030-58112-1_48

[29] Ye Tian, Shichen Peng, Xingyi Zhang, Tobias Rodemann, Kay Chen Tan, and Yaochu Jin. 2020. A Recommender System for Metaheuristic Algorithms for Continuous Optimization Based on Deep Recurrent Neural Networks. IEEE Trans. Artif. Intell. 1, 1 (2020), 5–18. https://doi.org/10.1109/TAI.2020.3022339

[30] Diederick Vermetten. 2023. modular Differential Evolution. https://github.com/Dvermetten/ModDE.

[31] Diederick Vermetten, Furong Ye, Thomas Bäck, and Carola Doerr. 2023. Reproducibility files and additional figures. Code repository: https://github.com/Dvermetten/Many-affine-BBOB Data and figure repository: https://doi.org/10.5281/zenodo.7826036.

[32] Diederick Vermetten, Furong Ye, and Carola Doerr. 2023. Using Affine Combinations of BBOB Problems for Performance Assessment. CoRR abs/2303.04573 (2023). https://doi.org/10.48550/arXiv.2303.04573 arXiv:2303.04573

[33] Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, and Thomas Bäck. 2022. IOHanalyzer: Detailed Performance Analysis for Iterative Optimization Heuristics. ACM Trans. Evol. Learn. Optim. 2, 1 (2022), 3:1–3:29. https://doi.org/10.1145/3510426 IOHanalyzer is available at CRAN, on GitHub, and as a web-based GUI; see https://iohprofiler.github.io/IOHanalyzer/ for links.

[34] Estefania Yap, Mario Andrés Muñoz, and Kate Smith-Miles. 2022.
Informing Multiobjective Optimization Benchmark Construction Through Instance Space Analysis. IEEE Trans. Evol. Comput. 26, 6 (2022), 1246–1260. https://doi.org/10.1109/TEVC.2022.3205165

[35] Martin Zaefferer and Frederik Rehbach. 2020. Continuous Optimization Benchmarks by Simulation. In Proc. of Parallel Problem Solving from Nature (PPSN'20) (LNCS, Vol. 12269), Thomas Bäck, Mike Preuss, André H. Deutz, Hao Wang, Carola Doerr, Michael T. M. Emmerich, and Heike Trautmann (Eds.). Springer, 273–286. https://doi.org/10.1007/978-3-030-58112-1_19
TgeTw7yIKEv
71eJdMzCCIi
automl.cc/AutoML/2023/ABCD_Track
2023
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks
["Roque Lopez", "Raoni Lourenco", "Remi Rampin", "Sonia Castelo", "A\u00e9cio S. R. Santos", "Jorge Henrique Piazentin Ono", "Claudio Silva", "Juliana Freire"]
We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to effectively construct pipelines over a large collection of primitives. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the loop tasks, including the selection of suitable pipelines and the development of solutions for complex systems. We present use cases that demonstrate some of these features. We report the results of detailed experimental evaluations which show that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance that is comparable or superior to state-of-the-art AutoML systems.
["AutoML", "Python Library", "Multiple ML Tasks"]
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks

Roque Lopez1, Raoni Lourenço2, Remi Rampin1, Sonia Castelo1, Aécio Santos1, Jorge Ono1, Claudio Silva1, Juliana Freire1
1New York University  2University of Luxembourg

Abstract. We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to construct pipelines over a large collection of primitives effectively. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, including selecting suitable pipelines and developing custom solutions for complex problems. We present use cases that demonstrate some of these features. We report the results of a detailed experimental evaluation showing that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance comparable or superior to state-of-the-art AutoML systems.

1 Introduction
Automated Machine Learning (AutoML) has emerged as an alternative to automatically synthesize machine learning (ML) pipelines, thereby democratizing ML techniques to non-experts as well as increasing the productivity of data scientists. Different approaches have been proposed for AutoML systems. Some focus on specific components of an ML pipeline, such as hyperparameter optimization or model selection, while others, given a dataset and a prediction task, generate end-to-end pipelines that encompass data pre-processing, feature, and model selection (Hutter et al., 2019). Most end-to-end systems are designed to work with tabular data and only support classification and regression problems (Feurer et al., 2015; LeDell and Poirier, 2020; Olson and Moore, 2016; Kotthoff et al., 2017).
Cloud AutoML (Google Cloud AutoML, 2020) and AutoGluon (Ericksonet al., 2020) also create pipelines to classify text and images and perform object detection tasks.However, these systems do not support more complex data types such as graphs, time series, audio,and video, limiting the types of problems they can address. Table 1 shows the set of task typessupported by different AutoML systems.In the context of DARPA’s Data-Driven Discovery of Models (D3M) program (Elliott, 2020),several AutoML systems have been developed to support a wide range of data types and MLtasks using an extensive set of computational primitives as building blocks – we refer to theseasmulti-task AutoML systems (MT-AutoML). MT-AutoML systems face an essential challenge:effectively searching an ample space of primitives required to synthesize pipelines for a broadrange of tasks and data types. To prune the search space, many D3M MT-AutoML systems usemanually-crafted templates and grammars (D3M, 2022) that prescribe combinations of primitivesthat make sense for different problems. 
This, in turn, leads to other challenges: creating these templates or grammars is not only time-consuming, but failing to include the rules that cover the relevant primitives (and their combinations) for multiple task types can negatively impact the ability of an MT-AutoML system to derive performant pipelines.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0

Table 1: Tasks supported by different AutoML Systems. [The table marks, for each system (AutoGluon, AutoWEKA, Auto-Sklearn, Cloud AutoML, H2O, TPOT, and AlphaD3M), which of the seventeen task types it supports: tabular, text, image, audio, video, and time-series classification; tabular regression; clustering; time-series forecasting; object detection; LUPI; community detection; link prediction; graph matching; vertex classification; collaborative filtering; and semi-supervised classification. AlphaD3M is the only system marked for all of them.]

We present AlphaD3M, an open-source AutoML library1 that supports a wide range of data and problem types (see Table 1). AlphaD3M introduces new techniques to effectively navigate the large search spaces that MT-AutoML systems face. They include an algorithm that applies meta-learning to automatically derive task-based context-free grammars (CFGs) which cover a multitude of problems, and a novel search strategy that, based on previously generated pipelines and their performance, prioritizes primitives that are correlated with good pipeline performance.

AlphaD3M includes components that aim to support usability and integration with other tasks in the data science lifecycle, from data exploration and model summarization to model deployment. It is possible to extend AlphaD3M and combine it with other tools through its flexible API. For example, its integration with the PipelineProfiler (Ono et al., 2021) allows users to explore and compare the set of derived pipelines visually.
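The task-based CFGs mentioned above can be pictured as production rules whose right-hand sides are pipeline patterns over primitive classes, expanded into concrete pipelines by substituting member primitives. A minimal sketch with a hypothetical grammar; only imputer.SKlearn and svm.SKlearn are primitive names that appear in the paper, the rest are invented for illustration:

```python
# Hypothetical task-specific CFG: "PIPELINE" patterns are sequences of
# primitive classes; each class maps to its (here tiny) set of primitives.
GRAMMAR = {
    "PIPELINE": [["IMPUTATION", "CLASSIFICATION"],
                 ["IMPUTATION", "FEATURE_SELECTION", "CLASSIFICATION"]],
    "IMPUTATION": ["imputer.SKlearn"],
    "FEATURE_SELECTION": ["select_percentile.SKlearn"],  # invented name
    "CLASSIFICATION": ["random_forest.SKlearn", "svm.SKlearn"],
}

def expand(pattern):
    """Recursively expand a pattern into all concrete primitive sequences."""
    if not pattern:
        return [[]]
    head, *rest = pattern
    tails = expand(rest)
    return [[p] + t for p in GRAMMAR.get(head, [head]) for t in tails]

def all_pipelines(grammar):
    """Enumerate every pipeline the grammar can produce."""
    return [pipe for pattern in grammar["PIPELINE"] for pipe in expand(pattern)]
```

Even this toy grammar yields four pipelines; with 312 primitives and permissive rules the enumeration explodes, which is why the paper derives task-specific grammars rather than writing one by hand.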
Besides describing the API and these components, wealso present case studies demonstrating how users can improve the ML solutions via interaction inAlphaD3M.We conducted a detailed experimental evaluation to assess the ability of AlphaD3M to handlea rich set of tasks and data types as well as to compare its performance against state-of-the-artAutoML and MT-AutoML systems. We used two benchmarks: (a) a collection of 112 datasetsthat covers seventeen different ML tasks, and (b) the OpenML AutoML Benchmark for tabularclassification problems. Our results show that the search strategies used by AlphaD3M are effective:the system generates pipelines whose performance is superior or on par with those derived byother systems, including systems that focus on a small set of problems and have to navigate a muchsmaller search space.2 Related WorkTask Coverage. Many AutoML systems have been proposed to work with tabular data, for example:Auto-sklearn (Feurer et al., 2015), TPOT (Olson and Moore, 2016), and H2O (LeDell and Poirier,2020). The deep reinforcement learning algorithm proposed by Drori et al. (2019) aimed to supportmultiple learning tasks and data types, however, its implementation was limited to classificationand regression tasks over tabular and text data. AutoML systems developed in industry, such asCloud AutoML by Google and AutoGluon by Amazon, handle text and image data, but still supporta limited number of learning tasks. In contrast, AlphaD3M supports a wide range of data types(tabular, text, images, audio, video, and graph) and a rich set of ML tasks as shown in Table 1.Data and Model Exploration. Interactive data analytics systems such as Visus (Santos et al., 2019),TwoRavens (Gil et al., 2019), and Snowcat (Cashman et al., 2018) have been developed to guideusers throughout the model-building process, from exploring the input data to comparing the MLpipelines produced by AutoML systems. 
They target primarily domain experts who have little or1https://gitlab.com/ViDA-NYU/d3m/alphad3m2no expertise in ML and thus lack support for the customization of pipelines for complex problems.These systems trade off flexibility for ease of use. As such, they are limited to the operationsimplemented in their visual interfaces; extensive and time-consuming changes in their workflowsare required to support new data types and tasks (e.g., graph data). Other approaches mimic theinterface of traditional ML libraries, through which developers often build a single solution for agiven task (Grafberger et al., 2021). AlphaD3M allows ML experts to explore the derived pipelinesand customize them through a user-friendly interface within a Jupyter Notebook environment. Inaddition, instead of retrieving only the best pipeline, AlphaD3M returns all valid pipelines, ranks,and presents them to the user for comparison, refinement, and selection.3 The AlphaD3M LibraryFigure 1: Overview of AlphaD3M.AlphaD3M is a multi-task Au-toML system. It is imple-mented in Python and canbe used via pipinstallationor Docker. Figure 1 showsan overview of this libraryand its components. Tobuild ML pipelines, AlphaD3Muses a rich set of primitivesand a meta-learning databasefrom the D3M ecosystem D3M(2022). The pipeline search is conducted by four modules which: (a) automatically construct oftask-specific grammars; (b) prioritize primitives that are more likely to be effective; (c) synthesizepipelines using Monte Carlo Tree Search and Neural Networks (Drori et al., 2019); and (d) tunehyperparameters. The library implements a Python API through which users can define the problemto be solved, explore the input data, obtain model summaries, analyze and compare the producedpipelines, as well as improve and deploy them.3.1 The D3M EcosystemPrimitives. AlphaD3M uses a comprehensive collection of primitives developed by performersin the D3M program as well as from open-source libraries (e.g., scikit-learn). 
In total, there are312 primitives available for different steps in ML pipelines, including data pre-processing, featureextraction, feature selection, prediction, and clustering (D3M Primitives, 2022), and implementstate-of-the-art methods, such as ResNet50 (He et al., 2016), ARIMA (Wilson, 2016), among others.The Marvin Meta-Learning Database. Marvin is an open corpus of curated ML pipelines, datasets,and problems (Marvin, 2020). All pipelines in Marvin share the same set of primitives and arespecified using the D3M format. Marvin stores approximately 2.5 million pipelines executed over600 datasets. Since data scientists and AutoML systems that use different search strategies haveproduced these pipelines, the database covers a wide variety of pipeline patterns. As discussedbelow, we leverage the data in Marvin to assist in and improve the AlphaD3M search process. Tothe best of our knowledge, ours is the first work that explores this corpus.3.2 Pipeline SearchThe automatic synthesis of pipelines is a combinatorial problem in which we must find the bestcombinations of primitives and their hyperparameters. With 312 primitives and over 1,500 hy-perparameters in the D3M ecosystem, the search space becomes prohibitively large. For instance,considering just the classification task over tabular data, there are 22 data cleaning, 87 data trans-formation, and 44 classifier primitives, leading to 84,216 possible pipelines to test. AlphaD3M usesa multi-pronged approach to manage this search space described below.3APipeline Synthesis Using Monte Carlo Tree Search and Neural Networks. To synthesize theML pipelines, AlphaD3M uses the strategy introduced by Drori et al. (2019), which is based on asingle-player game technique inspired by AlphaZero (Silver et al., 2017). It applies model-basedreinforcement learning with a neural network sequence model, and a Monte Carlo Tree Search(MCTS). 
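The search-space figure quoted above is simple combinatorics: one primitive choice per step of a three-step tabular-classification pipeline.

```python
# One choice per pipeline step: data cleaning x data transformation x classifier.
n_cleaning, n_transformation, n_classifiers = 22, 87, 44
n_pipelines = n_cleaning * n_transformation * n_classifiers
print(n_pipelines)  # 84216, the figure quoted in the text
```

And this count ignores the more than 1,500 hyperparameters, which multiply the space further.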
The metadata encoding the pipeline, the dataset, and the task are analogous to an entire game-board configuration in AlphaZero. The possible game states consist of all valid pipelines generated from a set of primitives and modified by actions guided by a manually designed CFG. The model outputs a sequence of primitives; pipelines are constructed by an LSTM. Given a state s composed of a vector encoding the whole board configuration (dataset, task, pipeline), the neural network predicts the probabilities P(s,a) over actions a from state s. This process produces a set of action sequences S that describe a pipeline, which in turn solves task T on dataset D. The network also outputs an estimate of pipeline performance v. The reinforcement learning algorithm takes the predictions (P(s,a), v(s)) produced by the neural network and uses them in the MCTS by running multiple simulations to search for the pipeline sequence R with the best evaluation. An important benefit of this strategy is that it learns to synthesize pipelines.

B. Automatic Generation of Task-Based CFGs via Meta-Learning. Manually designed CFGs have many limitations: notably, they may not cover all applicable rules and pipeline structures, and consequently prevent the search process from exploring desirable pipelines that do not fit the grammar. Furthermore, to create the production rules or patterns in the grammar, a user needs knowledge of all the available primitives for a specific task and of how they work. For large primitive collections this is difficult, and the difficulty is compounded for MT-AutoML systems that support multiple problem types. Instead of relying on manually created CFGs, we propose a new strategy that uses meta-learning to derive grammars automatically and on the fly. It does so in two steps: 1) it selects task-specific pipelines and datasets from a meta-learning database (MLDB), and 2) it uses these to derive a portfolio of pipeline patterns.

Selecting Task-Oriented Datasets.
Since AlphaD3M supports different tasks, we need to retrievefrom the Marvin MLDB pipelines produced for tasks and datasets similar to the ones we provided asinputs to the AutoML system. For instance, if we want to solve a clustering problem over a datasetD, we retrieve the pipelines used for this problem over datasets similar to D. To select relevantpipelines for a given problem Pover dataset D, we use the “task keywords" tag list provided in theproblem definition as features that describe the task to be solved, and search Marvin for pipelinesthat contain a similar set of keywords. The list is encoded as a bag-of-words (BOW). Since the setis small and most of the tags are non-standard words, e.g., collaborativeFiltering, timeSeries , it ispossible to obtain accurate matches with this simple approach.Given the set of relevant pipelines RP, we select a subset RPDcontaining pipelines that wereapplied on datasets similar to D. To determine whether two datasets are similar, we use datasetfeatures including semantic types (e.g., categorical, date-time) and missing values, and encode themusing one-hot encoding. Datasets are compared using cosine similarity.The current implementation uses 16 unique semantic types detected by the data-mart_profiler (Datamart Profiler Library, 2021). In contrast to other approaches like TabSim(Habibi et al., 2020), or StruBERT (Trabelsi et al., 2022), AlphaD3M uses semantic types because, inthe grammar, it defines components to handle the dataset’s features, such as categorical or date-timeencoders, and these components are strongly related to semantic types. Also, these approachesfocus on tabular datasets, AlphaD3M handles other types of datasets, like image and text datasets.Finally, running these approaches is a very time-consuming task.Creating a Portfolio of Patterns. After identifying similar datasets, the next step is to select the bestpipelines to create a portfolio of pipeline patterns. 
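The dataset-matching step described above (one-hot semantic-type features compared with cosine similarity) can be sketched in plain Python. The semantic-type vocabulary and profiles below are hypothetical stand-ins for the 16 types detected by the datamart-profiler; only the encoding and the cosine comparison mirror the text.

```python
import math

# Hypothetical subset of the semantic types a profiler might detect.
SEMANTIC_TYPES = ["categorical", "date-time", "text", "numeric", "missing-values"]

def encode(profile):
    """One-hot encode the semantic types detected in a dataset profile."""
    return [1.0 if t in profile else 0.0 for t in SEMANTIC_TYPES]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def most_similar(query_profile, candidates):
    """Return the name of the candidate dataset closest to the query."""
    q = encode(query_profile)
    return max(candidates, key=lambda name: cosine(q, encode(candidates[name])))
```

For example, a query dataset with categorical and numeric columns would be matched to a candidate with the same profile rather than to a text/date-time dataset.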
To select these AlphaD3M takes into considerationpipeline performance for different datasets. Some datasets are more challenging than others – theperformance of a pipeline can vary widely for different datasets. To properly compare pipeline4performance, AlphaD3M uses a strategy based on the average distance to minimum (ADTM) (Wistubaet al., 2015), which transforms the performance to the distance to the best-observed performancescaled between 0 and 1. In contrast to ADTM, which uses the misclassification rate, AlphaD3Muses the actual performance (the score) of the pipelines and thus, it applies the average distance tomaximum instead to select the best pipelines. It then transforms the primitives within the pipelinesto their classes. For instance, the primitive imputer.SKlearn belongs to the class IMPUTATION . Ifthere is a pipeline with this structure: [ imputer.SKlearn svm.SKlearn ], it is converted to this pattern:[IMPUTATION CLASSIFICATION ]. Unlike Feurer et al. (2021), which creates a unique portfolioof pipelines in an offline phase, AlphaD3M creates the portfolio online, based on the query taskand dataset. Also, the output is a portfolio of patterns, not of static pipelines, which allows moreflexibility to construct pipelines. These patterns are used as production rules of the grammar.Algorithm 1 in the Appendix describes the process of building the grammar.CPrioritization of Primitives. When a data scientist builds an ML pipeline, they start this processusing primitives that are known to perform well. For example, XGBoost or Random Forests aregood initial candidates for classification tasks. AlphaD3M follows this intuition to identify goodcandidate primitives for a specific task, using the data from Marvin. This prior knowledge aboutpromising primitives can be helpful to find better pipelines faster.Similar to Ono et al. (2021), AlphaD3M uses Pearson Correlation (PC) to estimate how mucha primitive contributes to the score of the pipeline. 
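The scaling and pattern-abstraction steps described above can be sketched as follows. The primitive-to-class mapping is a tiny hypothetical excerpt (the imputer.SKlearn → IMPUTATION example comes from the text), and the distance-to-maximum rescaling is a minimal reading of the ADTM-style normalization, assuming min-max scaling per dataset.

```python
PRIMITIVE_CLASS = {  # hypothetical excerpt of the primitive-to-class mapping
    "imputer.SKlearn": "IMPUTATION",
    "svm.SKlearn": "CLASSIFICATION",
    "random_forest.SKlearn": "CLASSIFICATION",
}

def distance_to_max(scores_by_dataset):
    """Per dataset, rescale scores to the distance from the best observed score
    (0 = best, 1 = worst), making pipelines comparable across datasets."""
    out = {}
    for ds, scores in scores_by_dataset.items():
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        out[ds] = {pipe: (hi - s) / span for pipe, s in scores.items()}
    return out

def adtm(distances, pipe):
    """Average distance to maximum of one pipeline across all datasets."""
    vals = [d[pipe] for d in distances.values() if pipe in d]
    return sum(vals) / len(vals)

def to_pattern(pipeline):
    """Abstract a concrete pipeline into a pattern over primitive classes."""
    return [PRIMITIVE_CLASS.get(p, p) for p in pipeline]
```

Patterns produced by to_pattern (e.g. [IMPUTATION, CLASSIFICATION]) are what become the production rules of the derived grammar.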
However, instead of using the raw scores, it uses the ADTM values because they are scaled across different datasets. AlphaD3M estimates the primitive importance using the PC between the primitive indicator vector p (p_i = 1 if pipeline i contains the primitive in question and p_i = 0 otherwise) and the pipeline score vector s, where s_i is the score for pipeline i. Since p and s are dichotomous and quantitative variables, respectively, the Point-Biserial Correlation coefficient (PBC) (Sheskin, 2003) is an appropriate correlation measure: it is mathematically equivalent to the PC but can be calculated with fewer operations. The correlation values are normalized between 0 and 1 (using min-max normalization).

AlphaD3M calculates these correlations for the primitives at two levels: (a) global, considering all the pipelines, and (b) local, considering only the pipelines for each pattern. The goal is to estimate how important a primitive is for all the pipelines and for each pattern. Primitives with higher importance values should have priority during the pipeline search. Algorithm 2 describes the process of calculating the primitive importance values in detail (see the Appendix). To prioritize promising primitives, AlphaD3M includes these importance values in the MCTS formula:

U(s,a) = Q(s,a) + c (α P(s,a) + (1 − α) R(a)) · √N(s) / (1 + N(s,a))    (1)

where Q(s,a) is the expected reward for action a (the selection of primitive a) from state s, N(s,a) is the number of times action a was taken from state s, and N(s) is the number of times state s was visited. P(s,a) are the probabilities predicted by the neural network over actions a from state s, c is a constant that determines the amount of exploration, R(a) = G(a) · L(a), where G(a) and L(a) are the global and local importance of action a, and α is a coefficient that balances the trade-off between R(a) and P(s,a).

D. Decoupled Hyperparameter Tuning.
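A direct transcription of the selection rule in Eq. (1), useful for seeing how the importance term R(a) = G(a)·L(a) shifts exploration toward promising primitives. The default values for c and α are arbitrary illustration values, not the ones used by AlphaD3M.

```python
import math

def uct_score(q, p, g, l, n_state, n_action, c=1.4, alpha=0.5):
    """U(s,a) = Q(s,a) + c*(alpha*P(s,a) + (1-alpha)*R(a)) * sqrt(N(s)) / (1 + N(s,a)),
    where R(a) = G(a) * L(a) combines global and local primitive importance."""
    r = g * l
    return q + c * (alpha * p + (1 - alpha) * r) * math.sqrt(n_state) / (1 + n_action)
```

With equal priors and visit counts, the primitive with higher global and local importance receives the larger exploration bonus; setting alpha = 1 ignores importance and recovers the standard prior-guided term.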
Hyperparameter tuning is an essential part of fitting machinelearning models (Bergstra et al., 2011; Snoek et al., 2015; Dolatnia et al., 2016). This is also the casefor end-to-end ML pipelines that target different tasks, and all primitives contain hyperparameters,not just the estimators.AlphaD3M performs hyperparameter tuning as an independent task, after the pipelines areconstructed. It uses Bayesian optimization, which is the state-of-the-art for hyperparameter tuning5Figure 2: (a) A code snippet to solve a semi-supervised classification task. (b) AlphaD3M allows usersto inspect the contents of the input dataset, including column statistics and data types. (c)Analyzing ML pipelines through the integration with PipelineProfiler.(Bergstra and Bengio, 2012; Snoek et al., 2015; Dolatnia et al., 2016) and was shown to outperformmanual setting of parameters, grid search, and random search (Bergstra and Bengio, 2012; Turneret al., 2021).Tuning Top- kPipelines. AlphaD3M synthesizes and evaluates the pipelines using primitives withdefault values for hyperparameters. The pipelines are then ranked by performance, and the top-kpipelines are selected for tuning. AlphaD3M uses Sequential Model-Based Algorithm Configuration(SMAC) (Lindauer et al., 2022), a Python library for Bayesian optimization. It approximates aprobability model of the performance outcome given a parameter configuration that is updatedfrom a history of executions. AlphaD3M selects the Gaussian Processes models from SMAC tominimize an arbitrary acquisition function using the Expected Improvement criterion to choose theparameter values for each iteration until a condition (number of iterations) is met. The acquisitionfunction is designed to normalize the performance metric used to synthesize the pipelines betweenzero and one, as the pipeline execution evaluations increase, the acquisition function gets closer tozero. SMAC requires a set of unique parameters to assign values during its tuning procedure. 
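The top-k tuning stage can be sketched as below. Plain random search stands in for SMAC's Bayesian optimization, and pipelines and configurations are ordinary dictionaries; only the structure follows the text: rank pipelines by their default-hyperparameter score, tune only the k best, and keep the default configuration as the baseline.

```python
import random

def tune_top_k(pipelines, scores, search_space, evaluate, k=3, budget=20, seed=0):
    """Tune only the k best pipelines. `scores` holds each pipeline's score with
    default hyperparameters; random search stands in for Bayesian optimization."""
    rng = random.Random(seed)
    top = sorted(pipelines, key=lambda p: scores[p], reverse=True)[:k]
    best = {}
    for pipe in top:
        best_cfg, best_score = None, scores[pipe]  # default config as baseline
        for _ in range(budget):
            cfg = {name: rng.choice(vals) for name, vals in search_space[pipe].items()}
            s = evaluate(pipe, cfg)
            if s > best_score:
                best_cfg, best_score = cfg, s
        best[pipe] = (best_cfg, best_score)
    return best
```

Pipelines outside the top-k are never tuned, which is exactly the trade-off the ablation study later revisits: cheaper search, but possibly missing pipelines that would only shine after tuning.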
Since AlphaD3M considers multiple primitives with identical names, it constructs an internal hierarchical nomenclature of parameters and designs their dependencies using ConfigSpace.

3.3 The API
We have developed a Python-based API that supports building and exploring ML pipelines within a Jupyter Notebook environment. The API is integrated with the D3M AutoML systems and supports various dataset formats such as raw CSV, D3M, and OpenML. Model synthesis can be done with a few lines of code, as shown in Figure 2(a). The API allows users to (a) define a problem, (b) explore summaries of their input dataset, (c) summarize the produced pipelines, and (d) analyze and compare pipelines with respect to their performance scores and prediction outputs. We describe the main components of the API below.

Problem Definition. To build a predictive model, AlphaD3M needs a problem specification that describes a prediction problem, specifically: (a) the training dataset; (b) a target variable, i.e., what should be predicted by the predictive model; (c) the maximum running time that controls how long the search can take (to control the use of computational resources); (d) the desired performance metric; and (e) a list of task keywords that specify the kind of prediction task and, therefore, the techniques that should be used to solve the prediction problem. Figure 2(a) shows an example of how to define a problem in AlphaD3M.

Table 2: Comparison of MT-AutoML systems with respect to the number of supported task types, winner pipelines, and average rank by each system.

                           AlphaD3M  AutonML  Ensemble  Aika  Distil  Autoflow  Axolotl  Drori et al. (2019)
Unique ML tasks supported     17       16       15       17     15      16        14          2
Winner pipelines              49       39       30       21     20      11        10          7
Average rank                  2.85     2.89     2.90     3.99   4.68    5.32      5.73        6.85

Data Exploration. To build good predictive models, it is important to identify data attributes that lead to accurate predictions. The API provides multiple tools for data exploration.
For example, itshows different visualizations (compact, detail, and column views) that summarize the content oftabular datasets (see Figure 2 (b)).Pipeline Summary. After the pipeline search is complete, users can display a leaderboard, trainindividual pipelines with the complete data, perform predictions and evaluate them against aheld-out dataset.Pipeline Exploration. Users can analyze the produced pipelines using the PipelineProfiler Onoet al. (2021), which is fully integrated into AlphaD3M as shown in Figure 2(c). PipelineProfiler isa visual analytics tool that enables users to compare and explore the pipelines generated by theAutoML systems.Pipeline Refinement and Deployment. AlphaD3M allows users to save and load pipelines, enablingusers to reload them later and perform analyses without having to re-run the AutoML search.They can load the saved pipelines at any time for training or testing purposes. In addition, userscan export pipelines to Python code. This gives them more control and the ability to modify(and customize) the automatically generated pipelines (e.g., change hyperparameters, or replacea classifier primitive). More information about the API can be found on the documentation webpage: https://alphad3m.readthedocs.io/en/latest/api.html .4 EvaluationTo demonstrate the effectiveness of AlphaD3M and its ability to handle a rich set of ML tasks, wecompared AlphaD3M with state-of-the-art AutoML systems using two dataset collections. We alsopresent use cases to show how useful, flexible, and easy to use AlphaD3M is.4.1 Comparing AutoML SystemsD3M Datasets. This collection contains challenging datasets and cover a wide variety of tasks (atotal of 17 task types) and data types (see Table 3). We evaluated all the systems using train and testsplits. In most of the cases, the sizes are 0.8 and 0.2 for the train and test splits, respectively (see thedataset’s repository2for details). 
For each dataset, we ran the systems over the train split for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021). After that, we evaluated the best pipeline produced by each system on the test split. For this experiment, we used 1 GPU (GeForce GTX 1080 Ti), 14 CPU cores (Intel Xeon E5-2695 v4, 2.10 GHz), and 56 GB of memory.

Table 2 shows the number of supported task types (ML tasks), winner pipelines (i.e., pipelines with the best performance for a given dataset), and the average rank of each AutoML system (the rank of each system among the 8 AutoML systems applied to each dataset). If two or more systems produce pipelines that tie for the best score, all of them are counted as winner pipelines. As we can see, AlphaD3M and Aika were able to solve 17 out of 17 unique tasks, obtaining the best coverage. We also evaluated the effectiveness of AlphaD3M. It had the best overall performance, producing the best pipeline for 49 datasets with the best average rank (2.85).

2 https://datasets.datadrivendiscovery.org/d3m/datasets

Table 3: Number of datasets by task type and number of solved datasets by each AutoML system for all task types covered by the D3M datasets.

ML Task                            AlphaD3M  AutonML  Ensemble  Aika  Distil  Autoflow  Axolotl  Drori et al. (2019)
Tabular Classification (20)           20       19       18       20     18      17        13         20
Tabular Regression (11)               11       11       11        8      9       6         5          9
Image Classification (9)               9        8        9        9      7       7         2          0
Image Regression (1)                   1        1        1        1      1       1         1          0
Text Classification (9)                9        9        9        9      8       8         9          0
Audio Classification (2)               2        2        2        2      1       2         2          0
Graph Matching (3)                     3        3        3        3      2       2         2          0
Time series Forecasting (13)          13       13       13       13      2      12        10          0
Link Prediction (3)                    3        3        3        3      2       2         2          0
Collaborative Filtering (1)            1        0        1        1      0       1         0          0
Time series Classification (19)       19       19       19       17     19      15        19          0
Community Detection (3)                3        3        0        2      2       1         0          0
Video Classification (2)               2        2        2        2      0       2         2          0
Vertex Classification (4)              4        4        4        4      4       4         4          0
Object Detection (2)                   2        2        0        1      1       0         0          0
Semisupervised Classification (6)      6        6        6        3      6       4         3          0
LUPI (4)                               4        4        4        4      4       4         4          0

Analyzing the support for each task type individually in Table 3, we can see that AlphaD3M was able to produce valid pipelines for all the datasets, and it solved more datasets than the other systems. Even though AlphaD3M is inspired by Drori et al. (2019), Table 2 and Table 3 clearly show the difference between them: AlphaD3M handles a larger number of tasks and produces many more winner pipelines. This shows that the different components of AlphaD3M are effective at handling the larger search spaces required by MT-AutoML systems. The detailed scores obtained by each system on all the D3M datasets and the average rank by task can be found in Table 4 and Table 5 (Appendix). Additionally, we calculated the number of winner pipelines for the top-3 systems only on the datasets where all of them produced pipelines. AlphaD3M, Ensemble, and AutonML got 48, 42, and 38, respectively.
These results confirm that the superior performance of AlphaD3M is notsolely due to its support for a broader range of ML tasks.Figure 3: Ablation study for the different components of AlphaD3M.We performed an ablationstudy to analyze the contribu-tion of each component of Al-phaD3M on a random sample offive D3M datasets for classifica-tion tasks2(datasets for whichAlphaD3M obtained the best, av-erage and worst performances).Figure 3 shows the best scoresfor each dataset reached by thefull AlphaD3M and the versionswith some components removed(or replaced). As we can see, us-ing all components leads to thebest results.To evaluate the importance of the automatic grammar, we replaced it with the manually-designed grammar used in Drori et al. (2019). For POKER ,SPECTRO ,WORDS , and SICK datasets,when the manual grammar was used, AlphaD3M was not able to produce valid pipelines, whichhighlights the importance of automatically generating the grammar. These datasets contain multi-ple types of features like text, DateTime, etc., which were not covered by the manually-constructed8Figure 4: Performance of AutoML systems in OpenML Benchmark. X-axis shows the accuracy values(normalized by the best score), and Y-axis shows the IDs of the OpenML tasks.grammar. The prioritization of primitives also plays an important role in AlphaD3M. When thisfeature was not used, the performance decreased, e.g. in POKER ,SPECTRO , and LIBRAS datasets. Aswe can see in Figure 3, in most of the datasets, when we removed the hyperparameter tuning com-ponent, AlphaD3M obtained the same results. This suggests that the heuristic used by AlphaD3M(tuning only the top- kpipelines) may miss good pipelines that would attain better performanceafter tuning. In future work, we plan to investigate alternative strategies for hyperparameter tuningthat attain a better balance of computational cost and pipeline performance.OpenML Benchmark. Similar to Erickson et al. 
(2020), we compared our system with AutoWEKA,TPOT, H2O, AutoGluon, and Auto-Sklearn 2.0 (hereinafter referred to as Auto-Sklearn) on the 39OpenML datasets (Gijsbers et al., 2019). This corpus contains a variety of datasets intended torepresent real-world data science problems and covers binary and multiclass classification tasks.We used AMLB (Gijsbers et al., 2022) to compare the systems, running them locally for one hourusing 1 fold split and accuracy as the optimization metric. For this experiment, we used 4 CPUcores (Intel Xeon Platinum 8268 Processor, 2.9 GHz) and 32 GB memory.Figure 4 shows the scores (normalized by the best score) of all the systems (the detailed scorescan be found in Tables 6 and 7 in the Appendix). As we can see, AlphaD3M produced pipelineswhose performance is on par with the other AutoML systems. We also calculated the averagerank for all the systems for the 39 datasets. AlphaD3M got 3.64 of average rank, while Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA got 2.08, 2.33, 3.08, 3.72, and 5.10, respectively.To understand better these numbers, we also estimated the performance gain of the pipelines foundby AlphaD3M against pipelines generated by other systems. The average gain of AlphaD3M forthe OpenML datasets was +0.001, which shows that, in general, AlphaD3M attained good resultsfor this collection. We analyzed the 3 datasets ( task_146195 ,task_167119 andtask_168331 ) forwhich AlphaD3M generated pipelines with performance lower than other systems. This happenedbecause these datasets are imbalanced with multiple classes. The performance of AlphaD3M forthese could be improved with the inclusion of primitives to handle imbalanced datasets. Thisunderscores the importance of being able to add primitives to AutoML systems.Concerning the coverage, it is important to highlight that AlphaD3M succeeded for 38 datasets.Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA solved 39, 39, 34, 29, and 28 datasets,respectively. 
As pointed out by Gijsbers et al. (2022), the results of Auto-Sklearn on the OpenMLdatasets must be considered very carefully, since there could be an overlap between the datasetsused in its meta-learning process and the ones used in the evaluation. It’s important to highlightthat none of the OpenML datasets are included in the version of Marvin that was used by AlphaD3Min these experiments.94.2 Use CasesPivoting across ML tasks. Predicting hostile actions against ships and mariners worldwide isimportant to prevent piracy and prosecute the aggressors. Consider that an analyst from the U.S.National Geospatial-Intelligence Agency (NGA) is building a model using the Anti-Shipping ActivityMessages dataset (ASAM, 2021). She wants to identify which records mention guns and whichrecords do not. This is a non-trivial problem since a variety of terms (e.g., pistol, rifle, etc.) indicatewhether a gun is present. This dataset contains 8,000 documents, of which 1,400 were annotated.She started by using AlphaD3M to create models using the 1,400 labeled documents setting themodel search to 1 hour. AlphaD3M derived high-quality pipelines – the best pipeline had 0.90 ofF1. However, she wondered whether these pipelines could be further improved, in particular, byleveraging the 6,600 unlabeled documents through semi-supervised learning. AlphaD3M supportsa wide range of tasks, including semi-supervised learning – users just need to add the keyword“semiSupervised” as a parameter. The user then ran a new experiment using the 1,400 labeled and6,000 unlabeled instances as a training dataset. The results improved from 0.90 to 0.95 of F1. Theseexperiments show that by using AlphaD3M, data scientists can improve the results, pivoting fromone task (classification) to another (semi-supervised classification) very quickly.Reducing pipeline execution time through models exploration. 
Using content analysis and predictive modeling for conflict assessment is a common approach for conflict analysts to guide policy-making decisions (D’Orazio, 2020). Consider a conflict analyst trying to categorize explosion events that involve terrorist activities. She uses the explosion events dataset (Raleigh et al., 2010), which contains 20,000 articles describing events that involve terrorist activities. An article is relevant if it describes attacks involving explosions. To create classification models, she ran AlphaD3M for 1 hour. The system synthesized high-quality pipelines, with F1 values around 0.9. To identify the most suitable pipeline, she used PipelineProfiler to explore the derived models. She observed that the top-10 pipelines had similar scores, but their execution times were above 800 seconds. To address this problem, she tried a different strategy: combining progressive sampling and active learning to reduce the training data from 20,000 to 3,200 documents. She then re-ran AlphaD3M using the smaller set as the training dataset, keeping the rest of the workflow unchanged. The top F1 score improved from 0.91 to 0.96, and the execution time dropped from 800 to 125 seconds.

5 Conclusions

We introduced AlphaD3M, an MT-AutoML library that automatically synthesizes end-to-end pipelines for 17 ML tasks and 6 different data types. AlphaD3M introduces new methods to automatically derive grammars and prioritize primitives, which are essential for effectively managing the large space that MT-AutoML systems must search. In addition, AlphaD3M embraces a user-in-the-loop approach, through an API that allows users to explore the input data and the derived ML pipelines, as well as to customize the pipelines. We presented a detailed experimental evaluation that compares our approach to several state-of-the-art AutoML systems over different problems and datasets.
The results suggest that AlphaD3M is effective: not only does it solve a larger number of problem types, but it also derives pipelines with performance that is superior or on par with those derived by other systems.

Although AlphaD3M's approach is primitive-agnostic, so far it only relies on the D3M primitives to build ML pipelines. We plan to extend AlphaD3M by including additional state-of-the-art and more recent primitives, e.g., models published in the HuggingFace or PyTorch Hub repositories. Moreover, we would like to improve the system's interoperability with existing open-source primitives that use standard APIs, such as the well-known scikit-learn fit-predict API.

Acknowledgements. This work was partially supported by the DARPA D3M program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

ASAM (2021). ASAM: Anti-Shipping Activity Messages. https://msi.nga.mil/Piracy.

Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for Hyper-Parameter Optimization. In Proceedings of NIPS, pages 2546–2554.

Bergstra, J. and Bengio, Y. (2012). Random Search for Hyper-parameter Optimization. JMLR, pages 281–305.

Cashman, D., Humayoun, S. R., Heimerl, F., Park, K., Das, S., Thompson, J., Saket, B., Mosca, A., Stasko, J. T., Endert, A., Gleicher, M., and Chang, R. (2018). Visual Analytics for Automated Model Discovery. CoRR.

D3M (2022). D3M Website. https://datadrivendiscovery.org.

D3M Primitives (2022). D3M Primitives Website. https://gitlab.com/datadrivendiscovery/primitives/-/tree/master/primitives.

Datamart Profiler Library (2021). Datamart Profiler Website. https://pypi.org/project/datamart-profiler/.

Dolatnia, N., Fern, A., and Fern, X. (2016). Bayesian Optimization with Resource Constraints and Production. In Proceedings of ICAPS, pages 115–123.

D’Orazio, V. (2020). Conflict Forecasting and Prediction.
In Oxford Research Encyclopedia of International Studies. Oxford University Press.

Drori, I., Krishnamurthy, Y., Lourenco, R., Rampin, R., Cho, K., Silva, C., and Freire, J. (2019). Automatic Machine Learning by Pipeline Synthesis using Model-based Reinforcement Learning and a Grammar. In 6th ICML Workshop on Automated Machine Learning.

Elliott, J. (2020). DARPA Data-Driven Discovery of Models (D3M) Program. https://www.darpa.mil/program/data-driven-discovery-of-models.

Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv preprint arXiv:2003.06505.

Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2021). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and Robust Automated Machine Learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.

Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML Benchmark.

Gijsbers, P., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2019). An Open Source AutoML Benchmark. In 6th ICML Workshop on Automated Machine Learning.

Gil, Y., Honaker, J., Gupta, S., Ma, Y., D’Orazio, V., Garijo, D., Gadewar, S., Yang, Q., and Jahanshad, N. (2019). Towards Human-guided Machine Learning. In Proceedings of the Conference on Intelligent User Interfaces (IUI), pages 614–624. ACM.

Google Cloud AutoML (2020). Google Cloud AutoML Website. https://cloud.google.com/automl.

Grafberger, S., Guha, S., Stoyanovich, J., and Schelter, S. (2021). MLINSPECT: a Data Distribution Debugger for Machine Learning Pipelines. age, 20:123.

Habibi, M., Starlinger, J., and Leser, U. (2020).
Tabsim: A Siamese Neural Network for Accurate Estimation of Table Similarity. In 2020 IEEE International Conference on Big Data (Big Data), pages 930–937. IEEE.

He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.

Hutter, F., Kotthoff, L., and Vanschoren, J. (2019). Automated Machine Learning: Methods, Systems, Challenges. Springer.

Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., and Leyton-Brown, K. (2017). Auto-WEKA 2.0: Automatic Model Selection and Hyperparameter Optimization in WEKA. The Journal of Machine Learning Research, 18(1).

LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable Automatic Machine Learning. 7th ICML Workshop on Automated Machine Learning (AutoML).

Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T., Sass, R., and Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research, 23(54):1–9.

Marvin (2020). Marvin Website. https://datadrivendiscovery.org/marvin.

Olson, R. S. and Moore, J. H. (2016). TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning. In ICML AutoML Workshop, pages 66–74.

Ono, J. P., Castelo, S., López, R., Bertini, E., Freire, J., and Silva, C. T. (2021). PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines. IEEE Transactions on Visualization and Computer Graphics, 27:390–400.

Raleigh, C., Linke, A., Hegre, H., and Karlsen, J. (2010). Introducing ACLED: An Armed Conflict Location and Event Dataset: Special Data Feature. Journal of Peace Research, 47(5):651–660.

Santos, A., Castelo, S., Felix, C., Ono, J. P., Yu, B., Hong, S. R., Silva, C. T., Bertini, E., and Freire, J. (2019). Visus: An Interactive System for Automatic Machine Learning Model Building and Curation.
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA), pages 1–7. Association for Computing Machinery.

Sheskin, D. J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press.

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. Conference on Neural Information Processing Systems.

Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M. M. A., Prabhat, P., and Adams, R. P. (2015). Scalable Bayesian Optimization Using Deep Neural Networks. In Proceedings of the ICML, pages 2171–2180.

Trabelsi, M., Chen, Z., Zhang, S., Davison, B. D., and Heflin, J. (2022). StruBERT: Structure-aware BERT for Table Search and Matching. arXiv preprint arXiv:2203.14278.

Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., and Guyon, I. (2021). Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020. CoRR, abs/2104.10201.

Wilson, G. T. (2016). Time Series Analysis: Forecasting and Control, 5th Edition. Journal of Time Series Analysis, 37(5):709–711.

Wistuba, M., Schilling, N., and Schmidt-Thieme, L. (2015). Learning Hyperparameter Optimization Initializations. In 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1–10. IEEE.

A Broader Impact Statement

AlphaD3M can potentially strengthen efforts to democratize data science by broadening the application of automated predictive pipelines. Subject experts can create their own pipelines and explore them in the context of an ethical framework.
Its interoperable software infrastructure enables external auditing and improves the trust and interpretability of synthesized pipelines. The search-space management mechanism also allows efficient resource allocation and helps to prototype pipelines before performing high energy-consuming model training.

B Submission Checklist

1. For all authors...

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See mainly Sections 3 and 4.

(b) Did you describe the limitations of your work? [Yes] See Section 5. We also discuss the infeasibility of AutoML systems in general, and our efforts to mitigate limitations.

(c) Did you discuss any potential negative societal impacts of your work? [No] However, we advocate for the necessity of human-in-the-loop to build trust in the generated pipelines.

(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] Our paper follows these guidelines.

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [N/A] We are not including theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] We are not including theoretical results.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation and execution commands (either in the supplemental material or as a URL)? [Yes] We provide a link to our public GitLab repository and documentation webpage, where users can find information about the installation and instructions to run our system.
The reported evaluation was conducted by a third (independent) party in a competition among AutoML systems, so we cannot release that code.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] See the scripts/paper_automlconference folder in our repository.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] See the scripts/paper_automlconference folder in our repository.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] Our code is well documented and follows coding standards and best practices. We provide different Jupyter notebook examples and an API to show how to use AlphaD3M.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [No] We do not specify all the details. However, some details, like the data splits and search spaces, are publicly available in the references.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training, and hyperparameters for that code? [Yes] See Section 4.1.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 4.1.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We presented two comparisons (see Section 4). For the first comparison, we used the same protocol. For the second one, we used an existing asset and evaluated our system using the same time protocol.

(i) Did you compare performance over time?
[No] We ran the systems for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021), and reported the best score obtained during this time.

(j) Did you perform multiple runs of your experiments and report random seeds? [N/A] We did not perform multiple runs of our experiments.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not report error bars.

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A] We did not use surrogate benchmarks.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] Some of the reported evaluations were conducted by a third party.

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also hyperparameters of your own method)? [N/A] The hyperparameters were automatically tuned by our AutoML engine.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 4.1.

(b) Did you mention the license of the assets? [No] However, all assets are publicly available and the licenses can be retrieved from the references.

(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] We included a URL to the data used in the experiments.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The assets used in this paper are publicly available.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data used contain neither personally identifiable information nor offensive content.

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not carry out a user study.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not carry out a user study.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not carry out a user study.

C Additional Details

C.1 Algorithms

Algorithm 1 describes the process of building the grammar. getVectorTK and getVectorST represent the BOW and one-hot encoding functions, respectively. The best empirically determined values for the thresholds tsim and tperf are 0.8 and 0.5, respectively.

Algorithm 1: Grammar Builder
Input: Marvin datasets D, query dataset q, thresholds tsim, tperf
Initialize S = [] // Similar datasets
for di in D do
    simTK = cosineSimilarity(getVectorTK(di), getVectorTK(q))
    if simTK > tsim then
        simST = cosineSimilarity(getVectorST(di), getVectorST(q))
        if simST > tsim then
            Add di to S
Initialize P = calculateADTM(S)
Initialize R = [] // Production Rules
for pi in P do
    if performance(pi) > tperf then
        ri = convertToPattern(pi)
        Add ri to R
return R

Algorithm 2 describes the process of calculating the primitive importance values in detail. For instance, the primitive importance values calculated for XGBoost and Random Forest are 0.62 and 0.56, whereas for Nearest Centroid and K-Nearest Neighbors the values are 0.46 and 0.44. This shows that the importance values can be used as an indicator to prioritize the usage of primitives.

Algorithm 2: Primitives Importance
Input: Pipelines P, Patterns T
Initialize R = getPrimitives(P)
Initialize G, L = [] // Global and Local correlations
for ri in R do
    pc = PearsonCorrelation(ri, P)
    npc = normalize(pc)
    Add npc to G
for ti in T do
    pi = getPipelines(ti, P)
    R = getPrimitives(ti, pi)
    for ri in R do
        pc = PearsonCorrelation(ri, R)
        npc = normalize(pc)
        Add npc to L
return (G, L)

C.2 Grammars

Different tasks require different grammars.
For instance, the algorithms needed to solve time-series and semi-supervised classification problems have different structures and use different sets of primitives. Consequently, specialized grammars and production rules are needed for each task. Manually creating these grammars is time-consuming and error-prone, and relying on them can limit the effectiveness of AutoML systems with respect to problem coverage and the quality of the derived pipelines.

Figure 5 shows an excerpt of a grammar automatically generated by AlphaD3M to solve classification problems. The start symbol (S) is the starting point from which all the production rules can be derived. In the grammar, the terminal 'primitive' can be any of the available algorithms in AlphaD3M, and 'E' represents the empty symbol.

S ::= CATEGORICAL_ENCODER TEXT_FEATURIZER DATA_CONVERSION IMPUTATION CLASSIFICATION
S ::= TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER DIMENSIONALITY_REDUCTION CLASSIFICATION
S ::= DATA_STRUCTURE_ALIGNMENT IMPUTATION CLASSIFICATION
S ::= IMPUTATION FEATURE_SCALING CLASSIFICATION
S ::= IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION DIMENSIONALITY_REDUCTION CLASSIFICATION
IMPUTATION ::= 'primitive' | 'E'
CATEGORICAL_ENCODER ::= 'primitive' | 'E'
FEATURE_SCALING ::= 'primitive' | 'E'
FEATURE_SELECTION ::= 'primitive' | 'E'
DIMENSIONALITY_REDUCTION ::= 'primitive' | 'E'
DATA_CONVERSION ::= 'primitive'
TEXT_FEATURIZER ::= 'primitive'
DATA_STRUCTURE_ALIGNMENT ::= 'primitive'
CLASSIFICATION ::= 'primitive'

Figure 5: Excerpt of a grammar automatically generated by AlphaD3M for classification tasks

In Figure 6, we show the manual grammar used in the experiments. This grammar was proposed by Drori et al. (2019).
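Production rules like these can be encoded compactly and expanded into the set of candidate pipeline skeletons they generate. The sketch below is illustrative only: it uses a tiny hand-written subset of such rules, with category-specific placeholder terminals (imputer, scaler, etc.) instead of the generic 'primitive' terminal, and is not AlphaD3M's internal representation:

```python
import itertools

# A tiny hand-written grammar in the spirit of Figure 5; 'E' is the empty symbol.
GRAMMAR = {
    "S": [["IMPUTATION", "FEATURE_SCALING", "CLASSIFICATION"],
          ["IMPUTATION", "FEATURE_SELECTION", "CLASSIFICATION"]],
    "IMPUTATION": [["imputer"], ["E"]],
    "FEATURE_SCALING": [["scaler"], ["E"]],
    "FEATURE_SELECTION": [["selector"], ["E"]],
    "CLASSIFICATION": [["classifier"]],
}

def expand(symbol):
    """Yield every terminal sequence derivable from `symbol`."""
    if symbol not in GRAMMAR:                      # terminal symbol
        yield [] if symbol == "E" else [symbol]
        return
    for rule in GRAMMAR[symbol]:
        # Cartesian product of the expansions of each right-hand-side symbol.
        for parts in itertools.product(*(list(expand(s)) for s in rule)):
            yield [tok for part in parts for tok in part]

skeletons = {tuple(seq) for seq in expand("S")}
print(len(skeletons))  # → 6 distinct pipeline shapes, from bare classifier up
```

Enumerating (or sampling) derivations like this is what makes a grammar an effective way to bound the pipeline search space: only well-formed step sequences are ever considered.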
To generate this grammar for classification and regression tasks on tabular data, a developer was asked to manually review the primitives and group them into categories. For instance, the primitives decision_tree.SKlearn and random_forest.SKlearn were grouped into the category 'CLASSIFICATION'. Then, using his ML knowledge, he created the production rules of the grammar from these categories.

S ::= CLASSIFICATION_TASK | REGRESSION_TASK
CLASSIFICATION_TASK ::= CLASSIFICATION | DATA_CLEANING CLASSIFICATION | DATA_TRANSFORMATION CLASSIFICATION | DATA_CLEANING DATA_TRANSFORMATION CLASSIFICATION
REGRESSION_TASK ::= REGRESSION | DATA_CLEANING REGRESSION | DATA_TRANSFORMATION REGRESSION | DATA_CLEANING DATA_TRANSFORMATION REGRESSION
CLASSIFICATION ::= 'primitive'
REGRESSION ::= 'primitive'
DATA_CLEANING ::= 'primitive' DATA_CLEANING | 'E'
DATA_TRANSFORMATION ::= 'primitive' DATA_TRANSFORMATION | 'E'

Figure 6: Manual Grammar

C.3 Experiments

In Table 4, we can see the scores obtained by all the AutoML systems developed in the D3M program, including a majority-voting ensemble system, on a collection of 112 datasets.
This collection17contains challenging datasets that go beyond the simple tabular data and cover a wide variety oftasks and data types.Table 4: Scores obtained by AlphaD3M and the other AutoML systems developed in the D3M program.Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori124_120_mnist_8747 0.98 0.94 0.46 0.18 0.94 0.11 - -124_138_cifar100_1858 0.67 0.48 0.42 0.12 0.48 0.01 - -124_16_fashion_mnist 0.90 0.83 0.84 0.12 0.85 0.10 - -124_174_cifar10_MIN 0.88 0.82 0.84 0.27 0.80 0.10 - -124_188_usps_MIN 0.96 0.95 0.94 0.26 0.92 0.18 0.11 -124_214_coil20_MIN 0.99 0.99 0.99 0.85 0.97 - - -124_95_uc_merced_land_use_MIN 0.90 - 0.72 0.52 - 0.05 0.33 -1491_one_hundred_plants_margin_MIN 0.80 0.79 0.88 0.92 0.75 0.83 0.81 0.831567_poker_hand_MIN 0.90 0.84 0.28 0.48 0.12 0.13 - 0.27185_baseball_MIN 0.66 0.70 0.65 0.68 0.68 0.67 0.66 0.64196_autoMpg_MIN 6.57 9.12 5.74 11.95 7.49 6.01 15.36 7.0322_handgeometry_MIN 0.24 0.23 0.23 0.14 0.80 0.36 0.36 -26_radon_seed_MIN 0.02 0.02 0.24 0.03 0.02 0.06 1.40 0.0227_wordLevels_MIN 0.32 0.28 0.28 0.32 0.29 0.27 0.26 0.27299_libras_move_MIN 0.98 - - 0.48 - - 0.98 0.9730_personae_MIN 0.62 0.65 0.65 0.62 0.61 0.55 0.61 -313_spectrometer_MIN 0.43 0.37 0.37 0.30 0.32 0.33 0.23 0.4031_urbansound_MIN 0.93 0.93 0.91 0.75 0.92 0.77 0.49 -32_fma_MIN 0.55 0.57 0.34 0.28 - 0.11 0.11 -32_wikiqa_MIN 0.00 0.02 0.14 0.13 0.50 - 0.13 -38_sick_MIN 1.00 1.00 - 1.00 - - 0.49 1.004550_MiceProtein_MIN 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.0049_facebook_MIN 0.88 0.87 0.87 0.87 0.87 0.88 0.44 -534_cps_85_wages_MIN 20.11 20.35 22.07 23.15 24.86 21.44 - 20.7056_sunspots_MIN 34.55 11.82 8.64 8.45 58.30 9.40 90.60 -56_sunspots_monthly_MIN 64.61 41.18 46.86 41.04 - 62.20 27.74 -57_hypothyroid_MIN 0.96 0.98 0.99 0.98 0.74 0.99 0.97 0.9859_LP_karate_MIN 0.93 0.45 0.83 0.83 0.45 0.45 0.93 -59_umls_MIN 0.92 0.94 0.94 0.94 0.94 0.70 0.73 -60_jester_MIN 4.25 - 4.24 4.15 - 4.51 - -66_chlorineConcentration_MIN 0.82 0.86 0.81 0.52 0.78 0.68 0.23 
-6_70_com_amazon_MIN 0.85 0.85 - 0.85 0.85 - - -6_86_com_DBLP_MIN 0.72 0.72 - 0.72 0.72 - - -JIDO_SOHR_Articles_1061 0.98 0.94 0.94 0.81 0.56 0.60 0.64 -JIDO_SOHR_Tab_Articles_8569 1.00 0.99 1.00 1.00 0.56 1.00 1.00 -LL0_1100_popularkids_MIN 0.42 0.45 0.38 0.38 0.40 0.44 - 0.47LL0_186_braziltourism_MIN 0.14 0.35 0.36 0.17 0.24 0.20 0.34 0.16LL0_207_autoPrice_MIN 4.89·1065.76·1066.04·1063.76·1075.36·1065.43·1061.56·1085.81·106LL0_acled_reduced_MIN 0.83 0.88 0.89 0.84 0.91 0.85 0.74 0.91LL0_jido_reduced_MIN 0.90 0.89 0.91 0.90 0.90 0.90 - 0.90LL1_2734_CLIR 0.88 0.50 0.52 0.88 - - 0.50 -LL1_336_MS_Geolife_transport_MIN 0.60 1.00 0.99 - 0.85 - 0.98 -LL1_336_MS_Geolife_transport_separate 0.67 1.00 0.99 - 0.86 - 0.99 -LL1_3476_HMDB_actio_recognition_MIN 0.11 1.00 0.90 0.11 - 0.48 0.08 -LL1_50words_MIN 0.35 0.55 0.56 0.41 0.51 0.45 0.35 -LL1_726_TIDY_GPS_carpool 0.54 0.58 0.58 0.46 0.59 - 0.63 -LL1_736_population_spawn_MIN 1636.12 1806.40 1804.76 1644.26 - 2845.89 - -LL1_736_population_spawn_simpler_MIN 1346.10 1490.15 3669.54 1347.65 1323.72 1550.40 19887.20 -LL1_736_stock_market_MIN 7.64 1.49 8.69 1.75 - 30.66 - -LL1_ACLED_TOR_online_behavior_MIN 0.40 0.05 0.44 0.64 0.43 0.66 0.08 0.40LL1_Adiac_MIN 0.75 0.70 0.73 0.54 0.67 0.70 0.49 -LL1_ArrowHead_MIN 0.75 0.82 0.78 0.72 0.65 0.55 0.72 -LL1_CONFLICT_3457_atrocity 9.53 6.75 11.43 12.84 - 17.21 13.91 -LL1_Cricket_Y_MIN 0.52 0.54 0.59 0.52 0.62 0.53 0.45 -LL1_DIC28_net_MIN 0.84 0.80 0.80 0.80 0.80 0.84 - -LL1_ECG200_MIN 0.90 0.87 0.87 0.86 0.91 0.85 0.86 -LL1_EDGELIST_net_nomination_MIN 0.99 0.66 0.85 0.94 0.66 0.35 0.84 -LL1_ElectricDevices_MIN 0.54 0.42 0.46 0.06 0.44 0.27 0.31 -LL1_FISH_MIN 0.80 0.87 0.89 0.73 0.84 0.86 0.78 -LL1_FaceFour_MIN 0.84 0.83 0.71 0.55 0.65 0.40 0.66 -18(Table 4: Continued from the previous page)Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl DroriLL1_GS_process_classification_tab_MIN 0.80 0.80 0.80 0.80 0.80 0.73 - 0.81LL1_GS_process_classification_text_MIN 0.65 0.80 0.65 0.80 
0.80 0.76 0.80 -LL1_GT_actor_group_association_MIN 0.25 0.13 0.17 0.13 - - - -LL1_HandOutlines_MIN 0.89 0.91 0.90 0.88 0.88 0.88 0.88 -LL1_Haptics_MIN 0.43 0.42 0.44 0.42 0.41 0.45 0.42 -LL1_ItalyPowerDemand_MIN 0.93 0.95 0.95 0.95 0.95 0.91 0.90 -LL1_MIL_MUSK 0.68 0.77 0.83 0.67 0.80 0.80 - 0.72LL1_MIL_Mutagenesis 0.80 0.73 0.72 0.71 0.70 0.63 - 0.79LL1_MITLL_synthetic_vora_E_2538 0.29 0.53 0.52 0.50 0.31 0.44 - 0.38LL1_Meat_MIN 0.95 0.94 0.88 0.92 0.88 0.17 0.95 -LL1_OSULeaf_MIN 0.53 0.44 0.52 0.77 0.45 0.47 0.32 -LL1_PHEM_Monthly_Malnutrition_MIN 10.63 9.56 9.39 9.73 - 12.18 - -LL1_PHEM_weekly_malnutrition_MIN 3.34 4.32 3.45 2.94 - 4.23 4.18 -LL1_TXT_CLS_3746_newsgroup_MIN 0.60 0.46 0.55 0.48 0.60 0.45 0.23 -LL1_TXT_CLS_SST_Binary 0.73 0.82 0.82 0.55 - 0.51 0.53 -LL1_TXT_CLS_airline_opinion_MIN 0.81 0.80 0.81 0.80 0.81 0.72 0.72 -LL1_TXT_CLS_apple_products_sent_MIN 0.73 0.71 0.72 0.72 0.73 0.66 0.69 -LL1_VID_UCF11_MIN 0.99 0.99 0.25 0.27 - 0.02 0.08 -LL1_VTXC_1343_cora_MIN 0.61 0.04 0.22 0.17 0.04 0.13 0.52 -LL1_VTXC_1369_synthetic_MIN 0.95 0.22 0.33 0.21 0.22 0.19 0.48 -LL1_ViEWS_CM_S1 0.69 1.20 0.90 0.72 0.75 2.52 - 0.82LL1_ViEWS_PGM_S1 0.02 0.04 0.02 - 0.02 0.02 0.30 0.02LL1_bigearth_landuse_detection 0.90 0.96 0.76 0.65 0.21 - - -LL1_bn_fly_drosophila_medulla_net_MIN 0.24 0.24 - - - 0.19 - -LL1_h1b_visa_apps_7480 0.44 0.47 0.43 0.44 0.41 0.41 0.47 0.42LL1_net_nomination_seed_MIN 0.99 0.99 0.96 0.94 0.99 0.34 0.46 -LL1_penn_fudan_pedestrian_MIN 0.94 0.94 - 0.94 0.94 - - -LL1_retail_sales_total_MIN 1989.19 1921.54 1941.06 1966.30 1992.17 - 1971.76 2022.41LL1_terra_canopy_height_s4_100_MIN 113.04 68.44 39.02 52.21 - 79.86 343.27 -LL1_terra_canopy_height_s4_70_MIN 104.92 547.94 126.06 136.32 - 169.63 136.98 -LL1_terra_canopy_height_s4_80_MIN 112.95 92.95 32.57 74.59 - 111.49 74.54 -LL1_terra_canopy_height_s4_90_MIN 117.13 85.73 35.12 60.44 - 104.49 60.45 -LL1_terra_leaf_angle_mean_s4_MIN 0.04 0.09 0.05 0.04 - - 0.05 -LL1_tidy_terra_panicle_detection_MIN 0.01 
0.03 - - - - - -
SEMI_1040_sylva_prior_MIN 0.93 0.90 0.93 - 0.92 - - -
SEMI_1044_eye_movements_MIN 0.52 0.57 0.61 0.55 0.60 0.53 0.54 -
SEMI_1053_jm1_MIN 0.26 1.00 0.16 - 0.16 0.41 - -
SEMI_1217_click_prediction_small_MIN 0.04 0.03 0.04 - 0.17 - - -
SEMI_1459_artificial_characters_MIN 0.68 0.99 0.83 0.99 0.67 0.61 0.52 -
SEMI_155_pokerhand_MIN 0.58 0.66 0.60 0.05 0.64 0.50 0.51 -
kaggle_music_hackathon_MIN 21.88 17.56 19.64 24.24 21.79 - - 21.85
loan_status_MIN 0.40 0.50 0.51 0.44 0.33 - 0.48 0.46
political_instability_MIN 0.81 0.89 0.89 0.89 0.89 - 0.88 -
uu1_datasmash_MIN 1.00 1.00 1.00 1.00 0.61 1.00 1.00 -
uu2_gp_hyperparameter_estimation_MIN 0.89 0.88 0.57 0.89 - - - 0.89
uu3_world_development_indicators_MIN 2.39·10^10 5.54·10^12 4.12·10^12 - 4.40·10^12 - - -
uu3_world_development_indicators_raw 7.83·10^13 1.04·10^12 5.22·10^11 - - - - -
uu4_SPECT_MIN 0.00 0.92 0.92 0.90 0.89 0.90 0.78 -
uu5_heartstatlog_MIN 0.70 0.69 0.72 0.62 0.61 0.72 0.67 -
uu6_hepatitis_MIN 0.00 0.47 0.89 0.40 0.27 0.31 0.44 -
uu7_pima_diabetes_MIN 0.59 0.57 0.60 0.57 0.60 0.63 0.57 -
uu_101_object_categories_MIN 0.95 0.89 0.84 0.34 - 0.10 - -

The average rank values obtained by different AutoML systems for each task type in the D3M datasets can be seen in Table 5.
These datasets contain a total of 17 unique ML tasks.

Table 5: Average rank values by task obtained by different AutoML systems.
Task AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori
Image Classification 1.11 2.78 2.78 4.56 4.33 6.22 7.44 8.00
Tabular Classification 3.75 3.30 3.35 3.85 4.85 4.65 5.85 3.55
Tabular Regression 2.27 3.18 3.00 5.73 4.27 5.73 7.54 4.36
Image Regression 4.00 2.00 2.00 1.00 7.00 5.00 5.00 8.00
Text Classification 2.56 3.33 2.22 3.00 3.56 5.78 4.33 8.00
Audio Classification 1.50 1.00 3.50 5.00 5.50 5.00 6.00 8.00
Graph Matching 1.00 3.33 3.00 2.33 4.67 3.33 6.33 8.00
Time series Forecasting 3.38 3.62 2.62 2.23 7.31 5.08 5.08 8.00
Link Prediction 3.33 2.33 2.33 1.67 4.67 6.67 5.00 8.00
Collaborative Filtering 3.00 8.00 2.00 1.00 8.00 4.00 8.00 8.00
Time series Classification 3.26 2.26 2.16 4.68 3.79 5.32 4.53 8.00
Community Detection 1.00 1.00 8.00 3.33 3.33 6.33 8.00 8.00
Video Classification 2.50 1.00 3.00 3.50 8.00 4.50 5.50 8.00
Vertex Classification 1.00 4.00 3.25 4.25 4.00 6.50 3.50 8.00
Object Detection 1.50 1.00 8.00 4.50 4.50 8.00 8.00 8.00
Semisupervised Classification 3.50 2.33 2.33 6.00 2.83 6.00 6.83 8.00
LUPI 5.25 3.00 1.25 4.50 5.00 2.50 4.75 8.00

Tables 6 and 7 show the raw and normalized scores (normalized by the best score) obtained by each system on the 39 datasets of the OpenML AutoML Benchmark (Gijsbers et al., 2019). This benchmark represents real-world data science problems and covers binary and multiclass classification tasks.
Additionally, Table 6 shows the gain of AlphaD3M relative to the other systems.

Table 6: Raw scores obtained by AlphaD3M and the other AutoML systems.
Dataset AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3M Gain
task_10101 0.76 0.76 0.76 0.76 0.76 0.79 0.03
task_12 0.98 0.98 0.98 0.98 - 0.96 -0.01
task_146195 0.88 0.71 0.86 0.88 0.85 0.81 -0.03
task_146212 1.00 1.00 1.00 1.00 1.00 1.00 0.00
task_146606 0.74 0.60 0.73 0.72 - 0.73 0.03
task_146818 0.91 0.86 0.84 0.90 0.87 0.87 -0.01
task_146821 0.99 1.00 1.00 1.00 1.00 0.97 -0.03
task_146822 0.97 0.97 0.97 0.97 0.98 0.97 0.00
task_146825 0.91 - 0.91 0.90 - 0.86 -0.05
task_14965 0.91 0.88 0.91 0.91 0.91 0.91 0.00
task_167119 0.92 0.80 0.94 0.96 0.90 0.83 -0.08
task_167120 0.51 0.51 0.51 0.51 - 0.51 -0.00
task_168329 0.40 0.27 0.38 0.35 0.35 0.37 0.02
task_168330 0.73 0.65 0.73 0.73 0.70 0.72 0.01
task_168331 0.73 0.62 0.73 0.69 0.66 0.66 -0.02
task_168332 0.56 - 0.54 0.51 0.44 0.41 -0.10
task_168335 0.94 - 0.94 - 0.93 0.94 -0.00
task_168337 0.84 - 0.86 0.83 0.77 0.61 -0.21
task_168338 1.00 - 1.00 1.00 0.99 0.97 -0.03
task_168868 0.99 0.99 0.99 1.00 0.99 0.99 0.00
task_168908 0.74 0.73 0.76 0.72 - 0.77 0.03
task_168909 0.99 0.96 0.99 0.98 - 0.99 0.01
task_168910 0.72 0.60 0.72 0.72 0.71 0.65 -0.04
task_168911 0.81 0.82 0.82 0.82 0.81 0.81 -0.01
task_168912 0.93 0.92 0.95 0.95 0.95 0.94 -0.00
task_189354 0.67 - 0.67 0.61 0.67 0.65 -0.01
task_189355 0.94 - 0.00 - - 0.88 0.41
task_189356 0.71 - 0.69 - - - -
task_3 0.99 0.93 0.99 1.00 0.99 0.99 0.01
task_31 0.77 0.66 0.82 - 0.82 0.77 0.00
task_34539 0.95 - 0.95 0.95 0.95 0.95 -0.01
task_3917 0.87 - 0.86 - 0.88 0.86 -0.01
task_3945 0.98 - 0.98 0.98 0.98 0.98 0.00
task_53 0.86 0.67 0.85 0.88 - 0.82 0.01
task_7592 0.87 0.87 0.87 0.86 0.87 0.87 0.00
task_7593 0.97 0.66 0.96 0.80 - 0.95 0.10
task_9952 0.88 0.91 0.90 0.90 0.91 0.91 0.01
task_9977 0.98 0.95 0.97 0.98 0.97 0.96 -0.00
task_9981 0.94 0.86 0.96 0.94 0.96 0.94 0.01

Table 7: Normalized scores obtained by AlphaD3M and the other AutoML systems.
Dataset
AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3M
task_10101 0.97 0.97 0.97 0.97 0.97 1.00
task_12 0.99 1.00 0.99 0.99 - 0.98
task_146195 1.00 0.81 0.98 1.00 0.97 0.92
task_146212 1.00 1.00 1.00 1.00 1.00 1.00
task_146606 1.00 0.82 1.00 0.98 - 0.99
task_146818 1.00 0.94 0.92 0.98 0.95 0.95
task_146821 0.99 1.00 1.00 1.00 1.00 0.97
task_146822 1.00 0.99 1.00 1.00 1.00 1.00
task_146825 1.00 - 0.99 0.99 - 0.94
task_14965 1.00 0.96 1.00 1.00 1.00 1.00
task_167119 0.96 0.83 0.98 1.00 0.94 0.86
task_167120 1.00 1.00 1.00 0.99 - 0.99
task_168329 1.00 0.69 0.96 0.88 0.89 0.94
task_168330 1.00 0.89 1.00 1.00 0.97 0.98
task_168331 1.00 0.84 1.00 0.95 0.90 0.91
task_168332 1.00 - 0.98 0.93 0.80 0.75
task_168335 1.00 - 1.00 - 0.99 0.99
task_168337 0.98 - 1.00 0.97 0.89 0.71
task_168338 1.00 - 1.00 1.00 0.99 0.97
task_168868 1.00 0.99 1.00 1.00 1.00 1.00
task_168908 0.97 0.96 0.99 0.94 - 1.00
task_168909 1.00 0.97 1.00 0.99 - 1.00
task_168910 1.00 0.83 1.00 1.00 0.98 0.90
task_168911 0.99 1.00 1.00 1.00 0.99 0.98
task_168912 0.98 0.97 0.99 1.00 1.00 0.98
task_189354 1.00 - 1.00 0.91 1.00 0.96
task_189355 1.00 - 0.00 - - 0.94
task_189356 1.00 - 0.97 - - -
task_3 1.00 0.94 1.00 1.00 1.00 1.00
task_31 0.94 0.80 1.00 - 1.00 0.94
task_34539 1.00 - 1.00 1.00 0.99 0.99
task_3917 0.99 - 0.98 - 1.00 0.98
task_3945 1.00 - 1.00 0.99 1.00 1.00
task_53 0.97 0.76 0.96 1.00 - 0.93
task_7592 1.00 0.99 1.00 0.99 1.00 1.00
task_7593 1.00 0.68 0.99 0.82 - 0.97
task_9952 0.96 0.99 0.98 0.98 1.00 0.99
task_9977 1.00 0.97 1.00 1.00 1.00 0.99
task_9981 0.98 0.89 1.00 0.98 1.00 0.98
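For reference, the per-dataset normalization used in Table 7 (each raw score divided by the best score obtained on that dataset, with failed runs left blank) can be sketched as follows; the scores below are toy values, not entries from the tables:

```python
import numpy as np

def normalize_by_best(raw):
    """Divide each row (one dataset) by its best system score.
    NaN marks a failed run and is ignored when taking the maximum."""
    raw = np.asarray(raw, dtype=float)
    best = np.nanmax(raw, axis=1, keepdims=True)
    return raw / best

# Toy scores for 2 datasets x 3 systems; NaN marks a failed run.
toy = [[0.80, 0.90, np.nan],
       [0.50, 0.40, 0.45]]
print(normalize_by_best(toy))  # best system per row becomes 1.00
```

With this convention, a normalized score of 1.00 always marks the best system on that dataset, which makes the columns of Table 7 directly comparable across datasets with very different raw-score scales.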
B4uCPaPLPQr
71eJdMzCCIi
automl.cc/AutoML/2023/ABCD_Track
2023
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks
["Roque Lopez", "Raoni Lourenco", "Remi Rampin", "Sonia Castelo", "Aécio S. R. Santos", "Jorge Henrique Piazentin Ono", "Claudio Silva", "Juliana Freire"]
We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to effectively construct pipelines over a large collection of primitives. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, including the selection of suitable pipelines and the development of solutions for complex systems. We present use cases that demonstrate some of these features. We report the results of detailed experimental evaluations which show that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance that is comparable or superior to state-of-the-art AutoML systems.
["AutoML", "Python Library", "Multiple ML Tasks"]
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks
Roque Lopez (1), Raoni Lourenço (2), Remi Rampin (1), Sonia Castelo (1), Aécio Santos (1), Jorge Ono (1), Claudio Silva (1), Juliana Freire (1)
(1) New York University  (2) University of Luxembourg

Abstract
We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to construct pipelines over a large collection of primitives effectively. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, including selecting suitable pipelines and developing custom solutions for complex problems. We present use cases that demonstrate some of these features. We report the results of a detailed experimental evaluation showing that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance comparable or superior to state-of-the-art AutoML systems.

1 Introduction
Automated Machine Learning (AutoML) has emerged as an alternative to automatically synthesize machine learning (ML) pipelines, thereby democratizing ML techniques to non-experts as well as increasing the productivity of data scientists. Different approaches have been proposed for AutoML systems. Some focus on specific components of an ML pipeline, such as hyperparameter optimization or model selection, while others, given a dataset and a prediction task, generate end-to-end pipelines that encompass data pre-processing, feature, and model selection (Hutter et al., 2019). Most end-to-end systems are designed to work with tabular data and only support classification and regression problems (Feurer et al., 2015; LeDell and Poirier, 2020; Olson and Moore, 2016; Kotthoff et al., 2017).
Cloud AutoML (Google Cloud AutoML, 2020) and AutoGluon (Erickson et al., 2020) also create pipelines to classify text and images and perform object detection tasks. However, these systems do not support more complex data types such as graphs, time series, audio, and video, limiting the types of problems they can address. Table 1 shows the set of task types supported by different AutoML systems.

In the context of DARPA’s Data-Driven Discovery of Models (D3M) program (Elliott, 2020), several AutoML systems have been developed to support a wide range of data types and ML tasks using an extensive set of computational primitives as building blocks; we refer to these as multi-task AutoML systems (MT-AutoML). MT-AutoML systems face an essential challenge: effectively searching an ample space of primitives required to synthesize pipelines for a broad range of tasks and data types. To prune the search space, many D3M MT-AutoML systems use manually-crafted templates and grammars (D3M, 2022) that prescribe combinations of primitives that make sense for different problems.
This, in turn, leads to other challenges: creating these templates or grammars is not only time-consuming, but failing to include the necessary rules that cover the relevant primitives (and their combinations) for multiple task types can negatively impact the ability of an MT-AutoML system to derive performant pipelines.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0

Table 1: Tasks supported by different AutoML systems. The columns cover tabular, text, image, audio, and video classification; tabular regression; clustering; time series forecasting and classification; object detection; LUPI; community detection; link prediction; graph matching; vertex classification; collaborative filtering; and semi-supervised classification. AlphaD3M supports all seventeen task types; AutoGluon and Cloud AutoML support six each, H2O three, and AutoWEKA, Auto-Sklearn, and TPOT two each.

We present AlphaD3M, an open-source AutoML library[1] that supports a wide range of data and problem types (see Table 1). AlphaD3M introduces new techniques to navigate effectively the large search spaces MT-AutoML systems must explore. They include an algorithm that applies meta-learning to automatically derive task-based context-free grammars (CFGs) which cover a multitude of problems, and a novel search strategy that, based on previously generated pipelines and their performance, prioritizes primitives that are correlated with good pipeline performance.

AlphaD3M includes components that aim to support usability and integration with other tasks in the data science lifecycle, from data exploration and model summarization to model deployment. It is possible to extend AlphaD3M and combine it with other tools through its flexible API. For example, its integration with PipelineProfiler (Ono et al., 2021) allows users to explore and compare the set of derived pipelines visually.
Besides describing the API and these components, we also present case studies demonstrating how users can improve ML solutions via interaction in AlphaD3M.

We conducted a detailed experimental evaluation to assess the ability of AlphaD3M to handle a rich set of tasks and data types, as well as to compare its performance against state-of-the-art AutoML and MT-AutoML systems. We used two benchmarks: (a) a collection of 112 datasets that covers seventeen different ML tasks, and (b) the OpenML AutoML Benchmark for tabular classification problems. Our results show that the search strategies used by AlphaD3M are effective: the system generates pipelines whose performance is superior to or on par with those derived by other systems, including systems that focus on a small set of problems and have to navigate a much smaller search space.

2 Related Work
Task Coverage. Many AutoML systems have been proposed to work with tabular data, for example Auto-sklearn (Feurer et al., 2015), TPOT (Olson and Moore, 2016), and H2O (LeDell and Poirier, 2020). The deep reinforcement learning algorithm proposed by Drori et al. (2019) aimed to support multiple learning tasks and data types; however, its implementation was limited to classification and regression tasks over tabular and text data. AutoML systems developed in industry, such as Cloud AutoML by Google and AutoGluon by Amazon, handle text and image data, but still support a limited number of learning tasks. In contrast, AlphaD3M supports a wide range of data types (tabular, text, images, audio, video, and graph) and a rich set of ML tasks, as shown in Table 1.

Data and Model Exploration. Interactive data analytics systems such as Visus (Santos et al., 2019), TwoRavens (Gil et al., 2019), and Snowcat (Cashman et al., 2018) have been developed to guide users throughout the model-building process, from exploring the input data to comparing the ML pipelines produced by AutoML systems.
They target primarily domain experts who have little or no expertise in ML and thus lack support for the customization of pipelines for complex problems. These systems trade off flexibility for ease of use. As such, they are limited to the operations implemented in their visual interfaces; extensive and time-consuming changes in their workflows are required to support new data types and tasks (e.g., graph data). Other approaches mimic the interface of traditional ML libraries, through which developers often build a single solution for a given task (Grafberger et al., 2021). AlphaD3M allows ML experts to explore the derived pipelines and customize them through a user-friendly interface within a Jupyter Notebook environment. In addition, instead of retrieving only the best pipeline, AlphaD3M returns all valid pipelines, ranks them, and presents them to the user for comparison, refinement, and selection.

[1] https://gitlab.com/ViDA-NYU/d3m/alphad3m

3 The AlphaD3M Library
Figure 1: Overview of AlphaD3M.

AlphaD3M is a multi-task AutoML system. It is implemented in Python and can be used via pip installation or Docker. Figure 1 shows an overview of the library and its components. To build ML pipelines, AlphaD3M uses a rich set of primitives and a meta-learning database from the D3M ecosystem (D3M, 2022). The pipeline search is conducted by four modules which: (a) automatically construct task-specific grammars; (b) prioritize primitives that are more likely to be effective; (c) synthesize pipelines using Monte Carlo Tree Search and neural networks (Drori et al., 2019); and (d) tune hyperparameters. The library implements a Python API through which users can define the problem to be solved, explore the input data, obtain model summaries, analyze and compare the produced pipelines, as well as improve and deploy them.

3.1 The D3M Ecosystem
Primitives. AlphaD3M uses a comprehensive collection of primitives developed by performers in the D3M program as well as from open-source libraries (e.g., scikit-learn).
In total, there are 312 primitives available for different steps in ML pipelines, including data pre-processing, feature extraction, feature selection, prediction, and clustering (D3M Primitives, 2022); they implement state-of-the-art methods such as ResNet50 (He et al., 2016) and ARIMA (Wilson, 2016), among others.

The Marvin Meta-Learning Database. Marvin is an open corpus of curated ML pipelines, datasets, and problems (Marvin, 2020). All pipelines in Marvin share the same set of primitives and are specified using the D3M format. Marvin stores approximately 2.5 million pipelines executed over 600 datasets. Since these pipelines have been produced by data scientists and by AutoML systems that use different search strategies, the database covers a wide variety of pipeline patterns. As discussed below, we leverage the data in Marvin to assist in and improve the AlphaD3M search process. To the best of our knowledge, ours is the first work that explores this corpus.

3.2 Pipeline Search
The automatic synthesis of pipelines is a combinatorial problem in which we must find the best combinations of primitives and their hyperparameters. With 312 primitives and over 1,500 hyperparameters in the D3M ecosystem, the search space becomes prohibitively large. For instance, considering just the classification task over tabular data, there are 22 data cleaning, 87 data transformation, and 44 classifier primitives, leading to 22 × 87 × 44 = 84,216 possible pipelines to test. AlphaD3M uses a multi-pronged approach to manage this search space, described below.

A. Pipeline Synthesis Using Monte Carlo Tree Search and Neural Networks. To synthesize the ML pipelines, AlphaD3M uses the strategy introduced by Drori et al. (2019), which is based on a single-player game technique inspired by AlphaZero (Silver et al., 2017). It applies model-based reinforcement learning with a neural network sequence model and Monte Carlo Tree Search (MCTS).
The metadata encoding the pipeline, the dataset, and the task is analogous to an entire game board configuration in AlphaZero. The possible game states consist of all valid pipelines generated from a set of primitives and modified by actions guided by a manually-designed CFG. The model outputs a sequence of primitives; pipelines are constructed by an LSTM. Given a state s composed of a vector encoding the whole board configuration (dataset, task, pipeline), the neural network predicts the probabilities P(s,a) over actions a from a state s. This process produces a set of action sequences S that describe a pipeline, which in turn solves task T on dataset D. The network also outputs an estimate of pipeline performance v. The reinforcement learning algorithm takes the predictions (P(s,a), v(s)) produced by the neural network and uses them in the MCTS by running multiple simulations to search for the pipeline sequence R with the best evaluation. An important benefit of this strategy is that it learns to synthesize pipelines.

B. Automatic Generation of Task-Based CFGs via Meta-Learning. Manually designed CFGs have many limitations; notably, they may not cover all applicable rules and pipeline structures, and consequently prevent the search process from exploring desirable pipelines that do not fit the grammar. Furthermore, to create the production rules or patterns in the grammar, a user needs to have knowledge of all the available primitives for a specific task and how they work. For large primitive collections, this is a difficult task, which is compounded for MT-AutoML systems that support multiple problem types. Instead of relying on manually created CFGs, we propose a new strategy that uses meta-learning to derive grammars automatically and on the fly. It does so in two steps: 1) it selects task-specific pipelines and datasets from a meta-learning database (MLDB), and 2) it uses these to derive a portfolio of pipeline patterns.

Selecting Task-Oriented Datasets.
Since AlphaD3M supports different tasks, we need to retrieve from the Marvin MLDB pipelines produced for tasks and datasets similar to the ones provided as inputs to the AutoML system. For instance, if we want to solve a clustering problem over a dataset D, we retrieve the pipelines used for this problem over datasets similar to D. To select relevant pipelines for a given problem P over dataset D, we use the “task keywords” tag list provided in the problem definition as features that describe the task to be solved, and search Marvin for pipelines that contain a similar set of keywords. The list is encoded as a bag-of-words (BOW). Since the set is small and most of the tags are non-standard words (e.g., collaborativeFiltering, timeSeries), it is possible to obtain accurate matches with this simple approach.

Given the set of relevant pipelines RP, we select a subset RPD containing pipelines that were applied on datasets similar to D. To determine whether two datasets are similar, we use dataset features including semantic types (e.g., categorical, date-time) and missing values, and encode them using one-hot encoding. Datasets are compared using cosine similarity. The current implementation uses 16 unique semantic types detected by the datamart_profiler (Datamart Profiler Library, 2021). In contrast to other approaches like TabSim (Habibi et al., 2020) or StruBERT (Trabelsi et al., 2022), AlphaD3M uses semantic types because, in the grammar, it defines components to handle the dataset’s features, such as categorical or date-time encoders, and these components are strongly related to semantic types. Also, those approaches focus on tabular datasets, while AlphaD3M handles other types of datasets, such as image and text datasets. Finally, running those approaches is very time-consuming.

Creating a Portfolio of Patterns. After identifying similar datasets, the next step is to select the best pipelines to create a portfolio of pipeline patterns.
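The dataset-matching step described above (one-hot encoded semantic-type features compared with cosine similarity) can be sketched as follows; this is a minimal illustration, and the semantic-type vocabulary and candidate datasets are invented (the real system uses 16 types detected by datamart_profiler):

```python
from math import sqrt

# Assumed vocabulary for illustration only.
SEMANTIC_TYPES = ["categorical", "date-time", "text", "numeric", "missing-values"]

def one_hot(dataset_types):
    """Encode a dataset's set of semantic types as a 0/1 vector over the vocabulary."""
    return [1 if t in dataset_types else 0 for t in SEMANTIC_TYPES]

def cosine_similarity(a, b):
    """Standard cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def most_similar(query_types, candidates):
    """Return the name of the candidate dataset most similar to the query dataset."""
    q = one_hot(query_types)
    return max(candidates, key=lambda name: cosine_similarity(q, one_hot(candidates[name])))
```

With this sketch, a query dataset tagged {"categorical", "text"} would be matched to a candidate carrying the same semantic types rather than one holding only date-time columns.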
To select these, AlphaD3M takes into consideration pipeline performance across different datasets. Some datasets are more challenging than others; the performance of a pipeline can vary widely for different datasets. To properly compare pipeline performance, AlphaD3M uses a strategy based on the average distance to minimum (ADTM) (Wistuba et al., 2015), which transforms the performance into the distance to the best-observed performance, scaled between 0 and 1. In contrast to ADTM, which uses the misclassification rate, AlphaD3M uses the actual performance (the score) of the pipelines and thus applies the average distance to maximum instead to select the best pipelines. It then transforms the primitives within the pipelines to their classes. For instance, the primitive imputer.SKlearn belongs to the class IMPUTATION. If there is a pipeline with the structure [imputer.SKlearn svm.SKlearn], it is converted to the pattern [IMPUTATION CLASSIFICATION]. Unlike Feurer et al. (2021), which creates a unique portfolio of pipelines in an offline phase, AlphaD3M creates the portfolio online, based on the query task and dataset. Also, the output is a portfolio of patterns, not of static pipelines, which allows more flexibility to construct pipelines. These patterns are used as production rules of the grammar. Algorithm 1 in the Appendix describes the process of building the grammar.

C. Prioritization of Primitives. When a data scientist builds an ML pipeline, they start this process using primitives that are known to perform well. For example, XGBoost or Random Forests are good initial candidates for classification tasks. AlphaD3M follows this intuition to identify good candidate primitives for a specific task, using the data from Marvin. This prior knowledge about promising primitives can be helpful to find better pipelines faster. Similar to Ono et al. (2021), AlphaD3M uses Pearson Correlation (PC) to estimate how much a primitive contributes to the score of the pipeline.
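A minimal sketch of the pattern extraction and correlation-based prioritization just described, with invented primitive names and toy scores: pipelines are mapped to class patterns, per-dataset scores are rescaled as distance-to-maximum, and a Pearson (point-biserial) correlation estimates how much a primitive co-occurs with good pipelines.

```python
from math import sqrt

# Assumed primitive-to-class mapping; the real labels come from D3M primitive metadata.
PRIMITIVE_CLASS = {
    "imputer.SKlearn": "IMPUTATION",
    "svm.SKlearn": "CLASSIFICATION",
    "random_forest.SKlearn": "CLASSIFICATION",
}

def to_pattern(pipeline):
    """E.g. [imputer.SKlearn, svm.SKlearn] -> [IMPUTATION, CLASSIFICATION]."""
    return [PRIMITIVE_CLASS[p] for p in pipeline]

def distance_to_maximum(scores):
    """Rescale scores on one dataset: 0.0 for the best pipeline, 1.0 for the worst."""
    best, worst = max(scores), min(scores)
    if best == worst:
        return [0.0] * len(scores)
    return [(best - s) / (best - worst) for s in scores]

def primitive_importance(indicator, scores):
    """Pearson correlation between a 0/1 primitive-indicator vector and pipeline
    scores; for a dichotomous indicator this equals the point-biserial coefficient."""
    n = len(indicator)
    mean_i = sum(indicator) / n
    mean_s = sum(scores) / n
    cov = sum((i - mean_i) * (s - mean_s) for i, s in zip(indicator, scores))
    var_i = sqrt(sum((i - mean_i) ** 2 for i in indicator))
    var_s = sqrt(sum((s - mean_s) ** 2 for s in scores))
    return cov / (var_i * var_s) if var_i and var_s else 0.0
```

In AlphaD3M, importance values like these (computed globally and per pattern, then min-max normalized) are what enter the MCTS selection score as R(a).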
However, instead of using the raw scores, it uses the ADTM values because they are scaled across different datasets. AlphaD3M estimates the primitive importance using PC between the primitive indicator vector p (p_i = 1 if pipeline i contains the primitive in question and p_i = 0 otherwise) and the pipeline score vector s, where s_i is the score for pipeline i. Since p and s are dichotomous and quantitative variables, respectively, the Point-Biserial Correlation coefficient (PBC) (Sheskin, 2003) is an appropriate correlation measure; it is mathematically equivalent to the PC but can be calculated with fewer operations. The correlation values are normalized between 0 and 1 (using min-max normalization).

AlphaD3M calculates these correlations for the primitives at two levels: (a) global, when it considers all the pipelines, and (b) local, when it considers only the pipelines for each pattern. The main goal is to estimate how important a primitive is for all the pipelines and for each pattern. Primitives with higher importance values should have priority during the pipeline search. Algorithm 2 describes the process of calculating the primitive importance values in detail (see the Appendix). To prioritize promising primitives, AlphaD3M includes these importance values in the MCTS formula:

U(s,a) = Q(s,a) + c (α P(s,a) + (1 − α) R(a)) √N(s) / (1 + N(s,a))    (1)

where Q(s,a) is the expected reward for action a (selection of primitive a) from state s, N(s,a) is the number of times action a was taken from state s, and N(s) is the number of times state s was visited. P(s,a) are the probabilities predicted by the neural network over actions a from a state s, c is a constant that determines the amount of exploration, R(a) = G(a) · L(a), where G(a) and L(a) are the global and local importance of action a, and α is a coefficient that balances R(a) and P(s,a).

D. Decoupled Hyperparameter Tuning
Hyperparameter tuning is an essential part of fitting machine learning models (Bergstra et al., 2011; Snoek et al., 2015; Dolatnia et al., 2016). This is also the case for end-to-end ML pipelines that target different tasks, where all primitives contain hyperparameters, not just the estimators.

AlphaD3M performs hyperparameter tuning as an independent task, after the pipelines are constructed. It uses Bayesian optimization, which is the state of the art for hyperparameter tuning (Bergstra and Bengio, 2012; Snoek et al., 2015; Dolatnia et al., 2016) and was shown to outperform manual setting of parameters, grid search, and random search (Bergstra and Bengio, 2012; Turner et al., 2021).

Figure 2: (a) A code snippet to solve a semi-supervised classification task. (b) AlphaD3M allows users to inspect the contents of the input dataset, including column statistics and data types. (c) Analyzing ML pipelines through the integration with PipelineProfiler.

Tuning Top-k Pipelines. AlphaD3M synthesizes and evaluates the pipelines using primitives with default values for hyperparameters. The pipelines are then ranked by performance, and the top-k pipelines are selected for tuning. AlphaD3M uses Sequential Model-Based Algorithm Configuration (SMAC) (Lindauer et al., 2022), a Python library for Bayesian optimization. It approximates a probability model of the performance outcome given a parameter configuration, which is updated from a history of executions. AlphaD3M selects the Gaussian Process models from SMAC to minimize an arbitrary acquisition function, using the Expected Improvement criterion to choose the parameter values for each iteration until a condition (number of iterations) is met. The acquisition function is designed to normalize the performance metric used to synthesize the pipelines between zero and one; as the pipeline execution evaluations increase, the acquisition function gets closer to zero. SMAC requires a set of unique parameters to assign values during its tuning procedure.
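The "tune only the top-k pipelines" heuristic can be sketched as below. This is a hedged stand-in, not AlphaD3M's implementation: random search replaces SMAC's Bayesian optimization to keep the example dependency-free, and the pipelines, search space, and evaluate function are placeholders.

```python
import random

def tune_top_k(pipelines, default_scores, evaluate, search_space, k=3, n_iter=20, seed=0):
    """Rank pipelines by their default-hyperparameter score, then tune only the
    top k. `evaluate(pipeline, config)` must return a validation score; higher
    is better. Returns {pipeline: (best_score, best_config)}."""
    rng = random.Random(seed)
    ranked = sorted(zip(pipelines, default_scores), key=lambda x: -x[1])[:k]
    best = {}
    for pipeline, default_score in ranked:
        key = tuple(pipeline)
        best[key] = (default_score, {})  # start from the untuned configuration
        for _ in range(n_iter):
            config = {name: rng.choice(values) for name, values in search_space.items()}
            score = evaluate(pipeline, config)
            if score > best[key][0]:
                best[key] = (score, config)
    return best
```

The design point illustrated here is the decoupling: search first evaluates pipelines with defaults, and the (expensive) tuning budget is spent only on the few most promising candidates.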
Since AlphaD3M considers multiple primitives with identical names, it constructs an internal hierarchical nomenclature of parameters and designs their dependencies using ConfigSpace.

3.3 The API
We have developed a Python-based API that supports the process of building and exploring ML pipelines within a Jupyter Notebook environment. The API is integrated with the D3M AutoML systems and supports various dataset formats such as raw CSV, D3M, and OpenML. Model synthesis can be done with a few lines of code, as shown in Figure 2(a). The API allows users to (a) define a problem, (b) explore summaries of their input dataset, (c) summarize the produced pipelines, and (d) analyze and compare pipelines with respect to their performance scores and prediction outputs. We describe the main components of the API below.

Problem Definition. To build a predictive model, AlphaD3M needs a problem specification that describes a prediction problem, specifically: (a) the training dataset; (b) a target variable, i.e., what should be predicted by the predictive model; (c) the maximum running time that controls how long the search can take (to control the use of computational resources); (d) the desired performance metric; and (e) a list of task keywords that specify the kind of prediction task and, therefore, the techniques that should be used to solve the prediction problem. Figure 2(a) shows an example of how to define a problem in AlphaD3M.

Table 2: Comparison of MT-AutoML systems with respect to the number of supported task types, winner pipelines, and average rank by each system.

                          AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al. (2019)
Unique ML tasks supported 17       16      15       17   15     16       14      2
Winner pipelines          49       39      30       21   20     11       10      7
Average rank              2.85     2.89    2.90     3.99 4.68   5.32     5.73    6.85

Data Exploration. To build good predictive models, it is important to identify data attributes that lead to accurate predictions. The API provides multiple tools for data exploration.
For example, it shows different visualizations (compact, detail, and column views) that summarize the content of tabular datasets (see Figure 2(b)).

Pipeline Summary. After the pipeline search is complete, users can display a leaderboard, train individual pipelines with the complete data, perform predictions, and evaluate them against a held-out dataset.

Pipeline Exploration. Users can analyze the produced pipelines using PipelineProfiler (Ono et al., 2021), which is fully integrated into AlphaD3M as shown in Figure 2(c). PipelineProfiler is a visual analytics tool that enables users to compare and explore the pipelines generated by the AutoML systems.

Pipeline Refinement and Deployment. AlphaD3M allows users to save and load pipelines, enabling them to reload the pipelines later and perform analyses without having to re-run the AutoML search. They can load the saved pipelines at any time for training or testing purposes. In addition, users can export pipelines to Python code. This gives them more control and the ability to modify (and customize) the automatically generated pipelines (e.g., change hyperparameters or replace a classifier primitive). More information about the API can be found on the documentation web page: https://alphad3m.readthedocs.io/en/latest/api.html

4 Evaluation
To demonstrate the effectiveness of AlphaD3M and its ability to handle a rich set of ML tasks, we compared AlphaD3M with state-of-the-art AutoML systems using two dataset collections. We also present use cases to show how useful, flexible, and easy to use AlphaD3M is.

4.1 Comparing AutoML Systems
D3M Datasets. This collection contains challenging datasets and covers a wide variety of tasks (a total of 17 task types) and data types (see Table 3). We evaluated all the systems using train and test splits. In most cases, the sizes are 0.8 and 0.2 for the train and test splits, respectively (see the dataset’s repository for details).
For each dataset, we ran the systems over the train split for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021). After that, we evaluated the best pipeline produced by each system on the test split. For this experiment, we used 1 GPU (GeForce GTX 1080 Ti), 14 CPU cores (Intel Xeon E5-2695 v4, 2.10 GHz), and 56 GB memory.

Table 2 shows the number of supported task types (ML tasks), winner pipelines (i.e., pipelines with the best performance for a given dataset), and the average rank of each AutoML system (rank of each system among the 8 AutoML systems applied to each dataset). If two or more systems produce pipelines that tie in the best score, all of them are considered winner pipelines. As we can see, AlphaD3M and Aika were able to solve 17 out of 17 unique tasks, obtaining the best coverage. We also evaluated the effectiveness of AlphaD3M. It had the best overall performance, producing the best pipeline for 49 datasets with the best average rank (2.85).

Dataset repository: https://datasets.datadrivendiscovery.org/d3m/datasets

Table 3: Number of datasets by task type and number of solved datasets by each AutoML system for all task types covered by the D3M datasets.

ML Task                           AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al. (2019)
Tabular Classification (20)       20 19 18 20 18 17 13 20
Tabular Regression (11)           11 11 11 8 9 6 5 9
Image Classification (9)          9 8 9 9 7 7 2 0
Image Regression (1)              1 1 1 1 1 1 1 0
Text Classification (9)           9 9 9 9 8 8 9 0
Audio Classification (2)          2 2 2 2 1 2 2 0
Graph Matching (3)                3 3 3 3 2 2 2 0
Time series Forecasting (13)      13 13 13 13 2 12 10 0
Link Prediction (3)               3 3 3 3 2 2 2 0
Collaborative Filtering (1)       1 0 1 1 0 1 0 0
Time series Classification (19)   19 19 19 17 19 15 19 0
Community Detection (3)           3 3 0 2 2 1 0 0
Video Classification (2)          2 2 2 2 0 2 2 0
Vertex Classification (4)         4 4 4 4 4 4 4 0
Object Detection (2)              2 2 0 1 1 0 0 0
Semisupervised Classification (6) 6 6 6 3 6 4 3 0
LUPI (4)                          4 4 4 4 4 4 4 0

Analyzing the support for each task type individually in Table 3, we can see that AlphaD3M was able to produce valid pipelines for all the datasets, and it solved more datasets than the other systems. Even though AlphaD3M is inspired by Drori et al. (2019), Table 2 and Table 3 clearly show the difference between them: AlphaD3M handles a larger number of tasks and produces many more winner pipelines. This shows that the different components of AlphaD3M are effective at handling the larger search spaces required by MT-AutoML systems. The detailed scores obtained by each system on all the D3M datasets and the average rank by task can be found in Table 4 and Table 5 (Appendix).

Additionally, we calculated the number of winner pipelines for the top-3 systems only on the datasets where all of them produced pipelines. The AlphaD3M, Ensemble, and AutonML systems got 48, 42, and 38, respectively.
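The average-rank metric used throughout this comparison can be computed as follows. This is an illustrative sketch with toy scores; the paper does not spell out its tie-handling, so here tied systems share the best rank, and systems that produced no pipeline for a dataset are simply skipped.

```python
def average_rank(scores_by_dataset):
    """scores_by_dataset: list of dicts mapping system -> score on one dataset.
    A system's rank on a dataset is 1 plus the number of systems that scored
    strictly better. Returns the mean rank per system (lower is better)."""
    totals, counts = {}, {}
    for scores in scores_by_dataset:
        for system, score in scores.items():
            rank = 1 + sum(1 for s in scores.values() if s > score)
            totals[system] = totals.get(system, 0) + rank
            counts[system] = counts.get(system, 0) + 1
    return {system: totals[system] / counts[system] for system in totals}
```

For example, a system that is best on one dataset and second-best on another averages to rank 1.5 over those two datasets.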
These results confirm that the superior performance of AlphaD3M is notsolely due to its support for a broader range of ML tasks.Figure 3: Ablation study for the different components of AlphaD3M.We performed an ablationstudy to analyze the contribu-tion of each component of Al-phaD3M on a random sample offive D3M datasets for classifica-tion tasks2(datasets for whichAlphaD3M obtained the best, av-erage and worst performances).Figure 3 shows the best scoresfor each dataset reached by thefull AlphaD3M and the versionswith some components removed(or replaced). As we can see, us-ing all components leads to thebest results.To evaluate the importance of the automatic grammar, we replaced it with the manually-designed grammar used in Drori et al. (2019). For POKER ,SPECTRO ,WORDS , and SICK datasets,when the manual grammar was used, AlphaD3M was not able to produce valid pipelines, whichhighlights the importance of automatically generating the grammar. These datasets contain multi-ple types of features like text, DateTime, etc., which were not covered by the manually-constructed8Figure 4: Performance of AutoML systems in OpenML Benchmark. X-axis shows the accuracy values(normalized by the best score), and Y-axis shows the IDs of the OpenML tasks.grammar. The prioritization of primitives also plays an important role in AlphaD3M. When thisfeature was not used, the performance decreased, e.g. in POKER ,SPECTRO , and LIBRAS datasets. Aswe can see in Figure 3, in most of the datasets, when we removed the hyperparameter tuning com-ponent, AlphaD3M obtained the same results. This suggests that the heuristic used by AlphaD3M(tuning only the top- kpipelines) may miss good pipelines that would attain better performanceafter tuning. In future work, we plan to investigate alternative strategies for hyperparameter tuningthat attain a better balance of computational cost and pipeline performance.OpenML Benchmark. Similar to Erickson et al. 
(2020), we compared our system with AutoWEKA,TPOT, H2O, AutoGluon, and Auto-Sklearn 2.0 (hereinafter referred to as Auto-Sklearn) on the 39OpenML datasets (Gijsbers et al., 2019). This corpus contains a variety of datasets intended torepresent real-world data science problems and covers binary and multiclass classification tasks.We used AMLB (Gijsbers et al., 2022) to compare the systems, running them locally for one hourusing 1 fold split and accuracy as the optimization metric. For this experiment, we used 4 CPUcores (Intel Xeon Platinum 8268 Processor, 2.9 GHz) and 32 GB memory.Figure 4 shows the scores (normalized by the best score) of all the systems (the detailed scorescan be found in Tables 6 and 7 in the Appendix). As we can see, AlphaD3M produced pipelineswhose performance is on par with the other AutoML systems. We also calculated the averagerank for all the systems for the 39 datasets. AlphaD3M got 3.64 of average rank, while Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA got 2.08, 2.33, 3.08, 3.72, and 5.10, respectively.To understand better these numbers, we also estimated the performance gain of the pipelines foundby AlphaD3M against pipelines generated by other systems. The average gain of AlphaD3M forthe OpenML datasets was +0.001, which shows that, in general, AlphaD3M attained good resultsfor this collection. We analyzed the 3 datasets ( task_146195 ,task_167119 andtask_168331 ) forwhich AlphaD3M generated pipelines with performance lower than other systems. This happenedbecause these datasets are imbalanced with multiple classes. The performance of AlphaD3M forthese could be improved with the inclusion of primitives to handle imbalanced datasets. Thisunderscores the importance of being able to add primitives to AutoML systems.Concerning the coverage, it is important to highlight that AlphaD3M succeeded for 38 datasets.Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA solved 39, 39, 34, 29, and 28 datasets,respectively. 
As pointed out by Gijsbers et al. (2022), the results of Auto-Sklearn on the OpenML datasets must be considered very carefully, since there could be an overlap between the datasets used in its meta-learning process and the ones used in the evaluation. It is important to highlight that none of the OpenML datasets are included in the version of Marvin that AlphaD3M used in these experiments.

4.2 Use Cases

Pivoting across ML tasks. Predicting hostile actions against ships and mariners worldwide is important to prevent piracy and prosecute the aggressors. Consider an analyst from the U.S. National Geospatial-Intelligence Agency (NGA) building a model using the Anti-Shipping Activity Messages dataset (ASAM, 2021). She wants to identify which records mention guns and which do not. This is a non-trivial problem, since a variety of terms (e.g., pistol, rifle, etc.) indicate whether a gun is present. The dataset contains 8,000 documents, of which 1,400 were annotated. She started by using AlphaD3M to create models from the 1,400 labeled documents, setting the model search to 1 hour. AlphaD3M derived high-quality pipelines: the best pipeline had an F1 of 0.90. However, she wondered whether these pipelines could be further improved, in particular by leveraging the 6,600 unlabeled documents through semi-supervised learning. AlphaD3M supports a wide range of tasks, including semi-supervised learning; users just need to add the keyword “semiSupervised” as a parameter. She then ran a new experiment using the 1,400 labeled and 6,000 unlabeled instances as the training dataset. The results improved from 0.90 to 0.95 F1. These experiments show that, by using AlphaD3M, data scientists can pivot from one task (classification) to another (semi-supervised classification) very quickly and improve their results.

Reducing pipeline execution time through model exploration.
Using content analysis and predictive modeling for conflict assessment is a common approach for conflict analysts to guide policy-making decisions (D'Orazio, 2020). Consider a conflict analyst trying to categorize explosion events that involve terrorist activities. She uses the explosion events dataset (Raleigh et al., 2010), which contains 20,000 articles describing events that involve terrorist activities; an article is relevant if it describes attacks involving explosions. To create classification models, she ran AlphaD3M for 1 hour. The system synthesized high-quality pipelines, with F1 values around 0.9. To identify the most suitable pipeline, she used PipelineProfiler to explore the derived models. She observed that the top-10 pipelines had similar scores, but their execution times were above 800 seconds. To address this problem, she tried a different strategy: combining progressive sampling and active learning to reduce the training data from 20,000 to 3,200 documents. Then, she re-ran AlphaD3M using the smaller set as the training dataset, while keeping the rest of the workflow unchanged. The top F1 score improved from 0.91 to 0.96, and the execution time dropped from 800 to 125 seconds.

5 Conclusions

We introduced AlphaD3M, an MT-AutoML library that automatically synthesizes end-to-end pipelines for 17 ML tasks and 6 different data types. AlphaD3M introduces new methods to automatically derive grammars and prioritize primitives, which are essential for effectively managing the large search space MT-AutoML systems must explore. In addition, AlphaD3M embraces a user-in-the-loop approach, through an API that allows users to explore the input data and the derived ML pipelines, as well as to customize the pipelines. We presented a detailed experimental evaluation that compares our approach to several state-of-the-art AutoML systems over different problems and datasets.
The results suggest that AlphaD3M is effective: not only does it solve a larger number of problem types, but it also derives pipelines with performance that is superior to or on par with those derived by other systems.

Although AlphaD3M's approach is primitive-agnostic, so far it only relies on the D3M primitives to build ML pipelines. We plan to extend AlphaD3M by including additional state-of-the-art and more recent primitives, e.g., models published in the HuggingFace or PyTorch Hub repositories. Moreover, we would like to improve the system's interoperability with existing open-source primitives that use standard APIs, such as the well-known scikit-learn fit-predict API.

Acknowledgements. This work was partially supported by the DARPA D3M program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

ASAM (2021). ASAM: Anti-Shipping Activity Messages. https://msi.nga.mil/Piracy.
Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for Hyper-Parameter Optimization. In Proceedings of NIPS, pages 2546–2554.
Bergstra, J. and Bengio, Y. (2012). Random Search for Hyper-parameter Optimization. JMLR, pages 281–305.
Cashman, D., Humayoun, S. R., Heimerl, F., Park, K., Das, S., Thompson, J., Saket, B., Mosca, A., Stasko, J. T., Endert, A., Gleicher, M., and Chang, R. (2018). Visual Analytics for Automated Model Discovery. CoRR.
D3M (2022). D3M Website. https://datadrivendiscovery.org.
D3M Primitives (2022). D3M Primitives Website. https://gitlab.com/datadrivendiscovery/primitives/-/tree/master/primitives.
Datamart Profiler Library (2021). Datamart Profiler Website. https://pypi.org/project/datamart-profiler/.
Dolatnia, N., Fern, A., and Fern, X. (2016). Bayesian Optimization with Resource Constraints and Production. In Proceedings of ICAPS, pages 115–123.
D'Orazio, V. (2020). Conflict Forecasting and Prediction.
In Oxford Research Encyclopedia of International Studies. Oxford University Press.
Drori, I., Krishnamurthy, Y., Lourenco, R., Rampin, R., Cho, K., Silva, C., and Freire, J. (2019). Automatic Machine Learning by Pipeline Synthesis using Model-based Reinforcement Learning and a Grammar. In 6th ICML Workshop on Automated Machine Learning.
Elliott, J. (2020). DARPA Data-Driven Discovery of Models (D3M) Program. https://www.darpa.mil/program/data-driven-discovery-of-models.
Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv preprint arXiv:2003.06505.
Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2021). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning.
Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and Robust Automated Machine Learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML Benchmark.
Gijsbers, P., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2019). An Open Source AutoML Benchmark. In 6th ICML Workshop on Automated Machine Learning.
Gil, Y., Honaker, J., Gupta, S., Ma, Y., D'Orazio, V., Garijo, D., Gadewar, S., Yang, Q., and Jahanshad, N. (2019). Towards Human-guided Machine Learning. In Proceedings of the Conference on Intelligent User Interfaces (IUI), pages 614–624. ACM.
Google Cloud AutoML (2020). Google Cloud AutoML Website. https://cloud.google.com/automl.
Grafberger, S., Guha, S., Stoyanovich, J., and Schelter, S. (2021). MLINSPECT: a Data Distribution Debugger for Machine Learning Pipelines. age, 20:123.
Habibi, M., Starlinger, J., and Leser, U. (2020).
Tabsim: A Siamese Neural Network for Accurate Estimation of Table Similarity. In 2020 IEEE International Conference on Big Data (Big Data), pages 930–937. IEEE.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.
Hutter, F., Kotthoff, L., and Vanschoren, J. (2019). Automated Machine Learning: Methods, Systems, Challenges. Springer.
Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., and Leyton-Brown, K. (2017). Auto-WEKA 2.0: Automatic Model Selection and Hyperparameter Optimization in WEKA. The Journal of Machine Learning Research, 18(1).
LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable Automatic Machine Learning. 7th ICML Workshop on Automated Machine Learning (AutoML).
Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T., Sass, R., and Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research, 23(54):1–9.
Marvin (2020). Marvin Website. https://datadrivendiscovery.org/marvin.
Olson, R. S. and Moore, J. H. (2016). TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning. In ICML AutoML Workshop, pages 66–74.
Ono, J. P., Castelo, S., López, R., Bertini, E., Freire, J., and Silva, C. T. (2021). PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines. IEEE Transactions on Visualization and Computer Graphics, 27:390–400.
Raleigh, C., Linke, A., Hegre, H., and Karlsen, J. (2010). Introducing ACLED: An Armed Conflict Location and Event Dataset: Special Data Feature. Journal of Peace Research, 47(5):651–660.
Santos, A., Castelo, S., Felix, C., Ono, J. P., Yu, B., Hong, S. R., Silva, C. T., Bertini, E., and Freire, J. (2019). Visus: An Interactive System for Automatic Machine Learning Model Building and Curation.
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA), pages 1–7. Association for Computing Machinery.
Sheskin, D. J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press.
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. Conference on Neural Information Processing Systems.
Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M. M. A., Prabhat, P., and Adams, R. P. (2015). Scalable Bayesian Optimization Using Deep Neural Networks. In Proceedings of the ICML, pages 2171–2180.
Trabelsi, M., Chen, Z., Zhang, S., Davison, B. D., and Heflin, J. (2022). StruBERT: Structure-aware BERT for Table Search and Matching. arXiv preprint arXiv:2203.14278.
Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., and Guyon, I. (2021). Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020. CoRR, abs/2104.10201.
Wilson, G. T. (2016). Time Series Analysis: Forecasting and Control, 5th Edition. Journal of Time Series Analysis, 37(5):709–711.
Wistuba, M., Schilling, N., and Schmidt-Thieme, L. (2015). Learning Hyperparameter Optimization Initializations. In 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1–10. IEEE.

A Broader Impact Statement

AlphaD3M can potentially strengthen the efforts in democratizing data science by broadening the application of automated predictive pipelines. Subject experts can create their own pipelines and explore them in the context of an ethical framework.
Its interoperable software infrastructure enables external auditing and improves the trust and interpretability of synthesized pipelines. The search space management mechanism also allows efficient resource allocation and helps to prototype pipelines before performing high energy-consuming model training.

B Submission Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See mainly Sections 3 and 4.
(b) Did you describe the limitations of your work? [Yes] See Section 5. We also discuss the infeasibility of AutoML systems in general, and our efforts to mitigate limitations.
(c) Did you discuss any potential negative societal impacts of your work? [No] However, we advocate for the necessity of human-in-the-loop approaches to build trust in the generated pipelines.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] Our paper follows these guidelines.

2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A] We are not including theoretical results.
(b) Did you include complete proofs of all theoretical results? [N/A] We are not including theoretical results.

3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation and execution commands (either in the supplemental material or as a URL)? [Yes] We provide a link to our public GitLab repository and documentation webpage, where users can find information about the installation and instructions to run our system.
The reported evaluation was conducted by a third (independent) party in a competition among AutoML systems, so we cannot release that code.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] See the scripts/paper_automlconference folder in our repository.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] See the scripts/paper_automlconference folder in our repository.
(d) Did you ensure sufficient code quality such that your code can be safely executed, and is the code properly documented? [Yes] Our code is well documented and follows coding standards and best practices. We provide different Jupyter notebook examples and an API to show how to use AlphaD3M.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [No] We do not specify all the details. However, some details, like the data splits and search spaces, are publicly available in the references.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training, and hyperparameters for that code? [Yes] See Section 4.1.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 4.1.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We presented two comparisons (see Section 4). For the first comparison, we used the same protocol. For the second one, we used an existing asset and evaluated our system using the same time protocol.
(i) Did you compare performance over time?
[No] We ran the systems for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021), and reported the best score during this time.
(j) Did you perform multiple runs of your experiments and report random seeds? [N/A] We did not perform multiple runs of our experiments.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not report error bars.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A] We did not use surrogate benchmarks.
(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] Some of the reported evaluations were conducted by a third party.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also the hyperparameters of your own method)? [N/A] The hyperparameters were automatically tuned by our AutoML engine.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 4.1.
(b) Did you mention the license of the assets? [No] However, all assets are publicly available and the licenses can be retrieved from the references.
(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] We included a URL to the data used in the experiments.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The assets used in this paper are publicly available.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data used contain neither personally identifiable information nor offensive content.

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not carry out a user study.
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not carry out a user study.
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not carry out a user study.

C Additional Details

C.1 Algorithms

Algorithm 1 describes the process of building the grammar. getVectorTK and getVectorST represent the BOW and one-hot encoding functions, respectively. The best empirically determined values for the thresholds t_sim and t_perf are 0.8 and 0.5, respectively.

Algorithm 1: Grammar Builder
Input: Marvin datasets D, query dataset q, thresholds t_sim, t_perf
Initialize S = []  // Similar datasets
for d_i in D do
    simTK = cosineSimilarity(getVectorTK(d_i), getVectorTK(q))
    if simTK > t_sim then
        simST = cosineSimilarity(getVectorST(d_i), getVectorST(q))
        if simST > t_sim then
            Add d_i to S
Initialize P = calculateADTM(S)
Initialize R = []  // Production rules
for p_i in P do
    if performance(p_i) > t_perf then
        r_i = convertToPattern(p_i)
        Add r_i to R
return R

Algorithm 2 describes the process of calculating the primitive importance values in detail. For instance, the primitive importance values calculated for XGBoost and Random Forest are 0.62 and 0.56, whereas for Nearest Centroid and K-Nearest Neighbors the values are 0.46 and 0.44. This shows that the importance values can be used as an indicator to prioritize the usage of primitives.

Algorithm 2: Primitives Importance
Input: pipelines P, patterns T
Initialize R = getPrimitives(P)
Initialize G, L = []  // Global and local correlations
for r_i in R do
    pc = PearsonCorrelation(r_i, P)
    npc = normalize(pc)
    Add npc to G
for t_i in T do
    P_i = getPipelines(t_i, P)
    R_i = getPrimitives(t_i, P_i)
    for r_i in R_i do
        pc = PearsonCorrelation(r_i, R_i)
        npc = normalize(pc)
        Add npc to L
return (G, L)

C.2 Grammars

Different tasks require different grammars.
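To make the grammar-building flow of Algorithm 1 above concrete, here is a minimal, illustrative Python sketch. It is not AlphaD3M's implementation: the dataset records, the `score`/`steps` fields, and the trivial stand-ins for calculateADTM, performance, and convertToPattern are all invented for illustration.

```python
from collections import Counter
from math import sqrt

# Illustrative sketch of Algorithm 1: filter Marvin datasets similar to
# the query (by token and semantic-type vectors), then keep the patterns
# of their well-performing pipelines as candidate production rules.

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def get_vector_tk(dataset) -> Counter:   # bag-of-words over tokens
    return Counter(dataset["tokens"])

def get_vector_st(dataset) -> Counter:   # one-hot over semantic types
    return Counter(set(dataset["semantic_types"]))

def build_grammar(marvin_datasets, query, t_sim=0.8, t_perf=0.5):
    # 1) keep datasets similar to the query in both vector spaces
    similar = [d for d in marvin_datasets
               if cosine_similarity(get_vector_tk(d), get_vector_tk(query)) > t_sim
               and cosine_similarity(get_vector_st(d), get_vector_st(query)) > t_sim]
    # 2) stand-in for calculateADTM + performance + convertToPattern:
    #    keep the step sequence of each well-scoring stored pipeline
    rules = []
    for d in similar:
        for pipeline in d.get("pipelines", []):
            if pipeline["score"] > t_perf:
                rules.append(tuple(pipeline["steps"]))
    return sorted(set(rules))

query = {"tokens": ["price", "date", "text"], "semantic_types": ["Text", "DateTime"]}
tabular = {"tokens": ["price", "date", "text"], "semantic_types": ["Text", "DateTime"],
           "pipelines": [{"steps": ["imputer", "xgboost"], "score": 0.9},
                         {"steps": ["imputer", "knn"], "score": 0.4}]}
images = {"tokens": ["image", "pixel"], "semantic_types": ["Image"],
          "pipelines": [{"steps": ["resnet"], "score": 0.95}]}
print(build_grammar([tabular, images], query))  # [('imputer', 'xgboost')]
```

In AlphaD3M, the surviving patterns become production rules of the task grammar rather than plain tuples; the sketch only mirrors the filtering logic.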
For instance, the algorithms needed to solve time-series and semi-supervised classification problems have a different structure and use a different set of primitives. Consequently, specialized grammars and production rules are needed for each task. Manually creating these grammars is time-consuming and error-prone, and relying on them can limit the effectiveness of AutoML systems with respect to problem coverage and the quality of the derived pipelines.

Figure 5 shows an excerpt of a grammar automatically generated by AlphaD3M to solve classification problems. The start symbol (S) is the starting point from which all the production rules can be derived. In the grammar, the terminal 'primitive' can be any of the available algorithms in AlphaD3M, and 'E' represents the empty symbol.

S ::= CATEGORICAL_ENCODER TEXT_FEATURIZER DATA_CONVERSION IMPUTATION CLASSIFICATION
S ::= TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER DIMENSIONALITY_REDUCTION CLASSIFICATION
S ::= DATA_STRUCTURE_ALIGNMENT IMPUTATION CLASSIFICATION
S ::= IMPUTATION FEATURE_SCALING CLASSIFICATION
S ::= IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION DIMENSIONALITY_REDUCTION CLASSIFICATION
IMPUTATION ::= 'primitive' | 'E'
CATEGORICAL_ENCODER ::= 'primitive' | 'E'
FEATURE_SCALING ::= 'primitive' | 'E'
FEATURE_SELECTION ::= 'primitive' | 'E'
DIMENSIONALITY_REDUCTION ::= 'primitive' | 'E'
DATA_CONVERSION ::= 'primitive'
TEXT_FEATURIZER ::= 'primitive'
DATA_STRUCTURE_ALIGNMENT ::= 'primitive'
CLASSIFICATION ::= 'primitive'

Figure 5: Excerpt of a grammar automatically generated by AlphaD3M for classification tasks

In Figure 6, you can see the manual grammar used in the experiments. This grammar was proposed by Drori et al. (2019).
To generate this grammar for classification and regression tabular tasks, a developer was asked to manually review the primitives and group them into categories. For instance, the primitives decision_tree.SKlearn and random_forest.SKlearn were grouped into the category 'CLASSIFICATION'. Then, using his knowledge of ML, he created the production rules of the grammar using these categories.

S ::= CLASSIFICATION_TASK | REGRESSION_TASK
CLASSIFICATION_TASK ::= CLASSIFICATION | DATA_CLEANING CLASSIFICATION | DATA_TRANSFORMATION CLASSIFICATION | DATA_CLEANING DATA_TRANSFORMATION CLASSIFICATION
REGRESSION_TASK ::= REGRESSION | DATA_CLEANING REGRESSION | DATA_TRANSFORMATION REGRESSION | DATA_CLEANING DATA_TRANSFORMATION REGRESSION
CLASSIFICATION ::= 'primitive'
REGRESSION ::= 'primitive'
DATA_CLEANING ::= 'primitive' DATA_CLEANING | 'E'
DATA_TRANSFORMATION ::= 'primitive' DATA_TRANSFORMATION | 'E'

Figure 6: Manual Grammar

C.3 Experiments

In Table 4, we can see the scores obtained by all the AutoML systems developed in the D3M program, including a majority-voting ensemble system, on a collection of 112 datasets.
This collection17contains challenging datasets that go beyond the simple tabular data and cover a wide variety oftasks and data types.Table 4: Scores obtained by AlphaD3M and the other AutoML systems developed in the D3M program.Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori124_120_mnist_8747 0.98 0.94 0.46 0.18 0.94 0.11 - -124_138_cifar100_1858 0.67 0.48 0.42 0.12 0.48 0.01 - -124_16_fashion_mnist 0.90 0.83 0.84 0.12 0.85 0.10 - -124_174_cifar10_MIN 0.88 0.82 0.84 0.27 0.80 0.10 - -124_188_usps_MIN 0.96 0.95 0.94 0.26 0.92 0.18 0.11 -124_214_coil20_MIN 0.99 0.99 0.99 0.85 0.97 - - -124_95_uc_merced_land_use_MIN 0.90 - 0.72 0.52 - 0.05 0.33 -1491_one_hundred_plants_margin_MIN 0.80 0.79 0.88 0.92 0.75 0.83 0.81 0.831567_poker_hand_MIN 0.90 0.84 0.28 0.48 0.12 0.13 - 0.27185_baseball_MIN 0.66 0.70 0.65 0.68 0.68 0.67 0.66 0.64196_autoMpg_MIN 6.57 9.12 5.74 11.95 7.49 6.01 15.36 7.0322_handgeometry_MIN 0.24 0.23 0.23 0.14 0.80 0.36 0.36 -26_radon_seed_MIN 0.02 0.02 0.24 0.03 0.02 0.06 1.40 0.0227_wordLevels_MIN 0.32 0.28 0.28 0.32 0.29 0.27 0.26 0.27299_libras_move_MIN 0.98 - - 0.48 - - 0.98 0.9730_personae_MIN 0.62 0.65 0.65 0.62 0.61 0.55 0.61 -313_spectrometer_MIN 0.43 0.37 0.37 0.30 0.32 0.33 0.23 0.4031_urbansound_MIN 0.93 0.93 0.91 0.75 0.92 0.77 0.49 -32_fma_MIN 0.55 0.57 0.34 0.28 - 0.11 0.11 -32_wikiqa_MIN 0.00 0.02 0.14 0.13 0.50 - 0.13 -38_sick_MIN 1.00 1.00 - 1.00 - - 0.49 1.004550_MiceProtein_MIN 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.0049_facebook_MIN 0.88 0.87 0.87 0.87 0.87 0.88 0.44 -534_cps_85_wages_MIN 20.11 20.35 22.07 23.15 24.86 21.44 - 20.7056_sunspots_MIN 34.55 11.82 8.64 8.45 58.30 9.40 90.60 -56_sunspots_monthly_MIN 64.61 41.18 46.86 41.04 - 62.20 27.74 -57_hypothyroid_MIN 0.96 0.98 0.99 0.98 0.74 0.99 0.97 0.9859_LP_karate_MIN 0.93 0.45 0.83 0.83 0.45 0.45 0.93 -59_umls_MIN 0.92 0.94 0.94 0.94 0.94 0.70 0.73 -60_jester_MIN 4.25 - 4.24 4.15 - 4.51 - -66_chlorineConcentration_MIN 0.82 0.86 0.81 0.52 0.78 0.68 0.23 
-6_70_com_amazon_MIN 0.85 0.85 - 0.85 0.85 - - -6_86_com_DBLP_MIN 0.72 0.72 - 0.72 0.72 - - -JIDO_SOHR_Articles_1061 0.98 0.94 0.94 0.81 0.56 0.60 0.64 -JIDO_SOHR_Tab_Articles_8569 1.00 0.99 1.00 1.00 0.56 1.00 1.00 -LL0_1100_popularkids_MIN 0.42 0.45 0.38 0.38 0.40 0.44 - 0.47LL0_186_braziltourism_MIN 0.14 0.35 0.36 0.17 0.24 0.20 0.34 0.16LL0_207_autoPrice_MIN 4.89·1065.76·1066.04·1063.76·1075.36·1065.43·1061.56·1085.81·106LL0_acled_reduced_MIN 0.83 0.88 0.89 0.84 0.91 0.85 0.74 0.91LL0_jido_reduced_MIN 0.90 0.89 0.91 0.90 0.90 0.90 - 0.90LL1_2734_CLIR 0.88 0.50 0.52 0.88 - - 0.50 -LL1_336_MS_Geolife_transport_MIN 0.60 1.00 0.99 - 0.85 - 0.98 -LL1_336_MS_Geolife_transport_separate 0.67 1.00 0.99 - 0.86 - 0.99 -LL1_3476_HMDB_actio_recognition_MIN 0.11 1.00 0.90 0.11 - 0.48 0.08 -LL1_50words_MIN 0.35 0.55 0.56 0.41 0.51 0.45 0.35 -LL1_726_TIDY_GPS_carpool 0.54 0.58 0.58 0.46 0.59 - 0.63 -LL1_736_population_spawn_MIN 1636.12 1806.40 1804.76 1644.26 - 2845.89 - -LL1_736_population_spawn_simpler_MIN 1346.10 1490.15 3669.54 1347.65 1323.72 1550.40 19887.20 -LL1_736_stock_market_MIN 7.64 1.49 8.69 1.75 - 30.66 - -LL1_ACLED_TOR_online_behavior_MIN 0.40 0.05 0.44 0.64 0.43 0.66 0.08 0.40LL1_Adiac_MIN 0.75 0.70 0.73 0.54 0.67 0.70 0.49 -LL1_ArrowHead_MIN 0.75 0.82 0.78 0.72 0.65 0.55 0.72 -LL1_CONFLICT_3457_atrocity 9.53 6.75 11.43 12.84 - 17.21 13.91 -LL1_Cricket_Y_MIN 0.52 0.54 0.59 0.52 0.62 0.53 0.45 -LL1_DIC28_net_MIN 0.84 0.80 0.80 0.80 0.80 0.84 - -LL1_ECG200_MIN 0.90 0.87 0.87 0.86 0.91 0.85 0.86 -LL1_EDGELIST_net_nomination_MIN 0.99 0.66 0.85 0.94 0.66 0.35 0.84 -LL1_ElectricDevices_MIN 0.54 0.42 0.46 0.06 0.44 0.27 0.31 -LL1_FISH_MIN 0.80 0.87 0.89 0.73 0.84 0.86 0.78 -LL1_FaceFour_MIN 0.84 0.83 0.71 0.55 0.65 0.40 0.66 -18(Table 4: Continued from the previous page)Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl DroriLL1_GS_process_classification_tab_MIN 0.80 0.80 0.80 0.80 0.80 0.73 - 0.81LL1_GS_process_classification_text_MIN 0.65 0.80 0.65 0.80 
0.80 0.76 0.80 -LL1_GT_actor_group_association_MIN 0.25 0.13 0.17 0.13 - - - -LL1_HandOutlines_MIN 0.89 0.91 0.90 0.88 0.88 0.88 0.88 -LL1_Haptics_MIN 0.43 0.42 0.44 0.42 0.41 0.45 0.42 -LL1_ItalyPowerDemand_MIN 0.93 0.95 0.95 0.95 0.95 0.91 0.90 -LL1_MIL_MUSK 0.68 0.77 0.83 0.67 0.80 0.80 - 0.72LL1_MIL_Mutagenesis 0.80 0.73 0.72 0.71 0.70 0.63 - 0.79LL1_MITLL_synthetic_vora_E_2538 0.29 0.53 0.52 0.50 0.31 0.44 - 0.38LL1_Meat_MIN 0.95 0.94 0.88 0.92 0.88 0.17 0.95 -LL1_OSULeaf_MIN 0.53 0.44 0.52 0.77 0.45 0.47 0.32 -LL1_PHEM_Monthly_Malnutrition_MIN 10.63 9.56 9.39 9.73 - 12.18 - -LL1_PHEM_weekly_malnutrition_MIN 3.34 4.32 3.45 2.94 - 4.23 4.18 -LL1_TXT_CLS_3746_newsgroup_MIN 0.60 0.46 0.55 0.48 0.60 0.45 0.23 -LL1_TXT_CLS_SST_Binary 0.73 0.82 0.82 0.55 - 0.51 0.53 -LL1_TXT_CLS_airline_opinion_MIN 0.81 0.80 0.81 0.80 0.81 0.72 0.72 -LL1_TXT_CLS_apple_products_sent_MIN 0.73 0.71 0.72 0.72 0.73 0.66 0.69 -LL1_VID_UCF11_MIN 0.99 0.99 0.25 0.27 - 0.02 0.08 -LL1_VTXC_1343_cora_MIN 0.61 0.04 0.22 0.17 0.04 0.13 0.52 -LL1_VTXC_1369_synthetic_MIN 0.95 0.22 0.33 0.21 0.22 0.19 0.48 -LL1_ViEWS_CM_S1 0.69 1.20 0.90 0.72 0.75 2.52 - 0.82LL1_ViEWS_PGM_S1 0.02 0.04 0.02 - 0.02 0.02 0.30 0.02LL1_bigearth_landuse_detection 0.90 0.96 0.76 0.65 0.21 - - -LL1_bn_fly_drosophila_medulla_net_MIN 0.24 0.24 - - - 0.19 - -LL1_h1b_visa_apps_7480 0.44 0.47 0.43 0.44 0.41 0.41 0.47 0.42LL1_net_nomination_seed_MIN 0.99 0.99 0.96 0.94 0.99 0.34 0.46 -LL1_penn_fudan_pedestrian_MIN 0.94 0.94 - 0.94 0.94 - - -LL1_retail_sales_total_MIN 1989.19 1921.54 1941.06 1966.30 1992.17 - 1971.76 2022.41LL1_terra_canopy_height_s4_100_MIN 113.04 68.44 39.02 52.21 - 79.86 343.27 -LL1_terra_canopy_height_s4_70_MIN 104.92 547.94 126.06 136.32 - 169.63 136.98 -LL1_terra_canopy_height_s4_80_MIN 112.95 92.95 32.57 74.59 - 111.49 74.54 -LL1_terra_canopy_height_s4_90_MIN 117.13 85.73 35.12 60.44 - 104.49 60.45 -LL1_terra_leaf_angle_mean_s4_MIN 0.04 0.09 0.05 0.04 - - 0.05 -LL1_tidy_terra_panicle_detection_MIN 0.01 
0.03 - - - - - -SEMI_1040_sylva_prior_MIN 0.93 0.90 0.93 - 0.92 - - -SEMI_1044_eye_movements_MIN 0.52 0.57 0.61 0.55 0.60 0.53 0.54 -SEMI_1053_jm1_MIN 0.26 1.00 0.16 - 0.16 0.41 - -SEMI_1217_click_prediction_small_MIN 0.04 0.03 0.04 - 0.17 - - -SEMI_1459_artificial_characters_MIN 0.68 0.99 0.83 0.99 0.67 0.61 0.52 -SEMI_155_pokerhand_MIN 0.58 0.66 0.60 0.05 0.64 0.50 0.51 -kaggle_music_hackathon_MIN 21.88 17.56 19.64 24.24 21.79 - - 21.85loan_status_MIN 0.40 0.50 0.51 0.44 0.33 - 0.48 0.46political_instability_MIN 0.81 0.89 0.89 0.89 0.89 - 0.88 -uu1_datasmash_MIN 1.00 1.00 1.00 1.00 0.61 1.00 1.00 -uu2_gp_hyperparameter_estimation_MIN 0.89 0.88 0.57 0.89 - - - 0.89uu3_world_development_indicators_MIN 2.39·10105.54·10124.12·1012-4.40·1012- - -uu3_world_development_indicators_raw 7.83·10131.04·10125.22·1011- - - - -uu4_SPECT_MIN 0.00 0.92 0.92 0.90 0.89 0.90 0.78 -uu5_heartstatlog_MIN 0.70 0.69 0.72 0.62 0.61 0.72 0.67 -uu6_hepatitis_MIN 0.00 0.47 0.89 0.40 0.27 0.31 0.44 -uu7_pima_diabetes_MIN 0.59 0.57 0.60 0.57 0.60 0.63 0.57 -uu_101_object_categories_MIN 0.95 0.89 0.84 0.34 - 0.10 - -19The average rank values obtained by different AutoML systems for each task type in the D3Mdatasets can be seen in Table 5. 
These datasets contain a total of 17 unique ML tasks.Table 5: Average rank values by task obtained by different AutoML systems.Task AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl DroriImage Classification 1.11 2.78 2.78 4.56 4.33 6.22 7.44 8.00Tabular Classification 3.75 3.30 3.35 3.85 4.85 4.65 5.85 3.55Tabular Regression 2.27 3.18 3.00 5.73 4.27 5.73 7.54 4.36Image Regression 4.00 2.00 2.00 1.00 7.00 5.00 5.00 8.00Text Classification 2.56 3.33 2.22 3.00 3.56 5.78 4.33 8.00Audio Classification 1.50 1.00 3.50 5.00 5.50 5.00 6.00 8.00Graph Matching 1.00 3.33 3.00 2.33 4.67 3.33 6.33 8.00Time series Forecasting 3.38 3.62 2.62 2.23 7.31 5.08 5.08 8.00Link Prediction 3.33 2.33 2.33 1.67 4.67 6.67 5.00 8.00Collaborative Filtering 3.00 8.00 2.00 1.00 8.00 4.00 8.00 8.00Time series Classification 3.26 2.26 2.16 4.68 3.79 5.32 4.53 8.00Community Detection 1.00 1.00 8.00 3.33 3.33 6.33 8.00 8.00Video Classification 2.50 1.00 3.00 3.50 8.00 4.50 5.50 8.00Vertex Classification 1.00 4.00 3.25 4.25 4.00 6.50 3.50 8.00Object Detection 1.50 1.00 8.00 4.50 4.50 8.00 8.00 8.00Semisupervised Classification 3.50 2.33 2.33 6.00 2.83 6.00 6.83 8.00LUPI 5.25 3.00 1.25 4.50 5.00 2.50 4.75 8.0020Table 6 and Table 7 show the raw and normalized scores (normalized by the best score) obtainedby each system on the 39 datasets of the OpenML AutoML Benchmark (Gijsbers et al., 2019).This benchmark represents real-world data science problems and covers binary and multiclassclassification tasks. 
Additionally, Table 6 shows the gain of AlphaD3M regarding the other systems.

Table 6: Raw scores obtained by AlphaD3M and the other AutoML systems.

Dataset     AutoGluon AutoWEKA Auto-Sklearn H2O  TPOT AlphaD3M Gain
task_10101  0.76 0.76 0.76 0.76 0.76 0.79 0.03
task_12     0.98 0.98 0.98 0.98 -    0.96 -0.01
task_146195 0.88 0.71 0.86 0.88 0.85 0.81 -0.03
task_146212 1.00 1.00 1.00 1.00 1.00 1.00 0.00
task_146606 0.74 0.60 0.73 0.72 -    0.73 0.03
task_146818 0.91 0.86 0.84 0.90 0.87 0.87 -0.01
task_146821 0.99 1.00 1.00 1.00 1.00 0.97 -0.03
task_146822 0.97 0.97 0.97 0.97 0.98 0.97 0.00
task_146825 0.91 -    0.91 0.90 -    0.86 -0.05
task_14965  0.91 0.88 0.91 0.91 0.91 0.91 0.00
task_167119 0.92 0.80 0.94 0.96 0.90 0.83 -0.08
task_167120 0.51 0.51 0.51 0.51 -    0.51 -0.00
task_168329 0.40 0.27 0.38 0.35 0.35 0.37 0.02
task_168330 0.73 0.65 0.73 0.73 0.70 0.72 0.01
task_168331 0.73 0.62 0.73 0.69 0.66 0.66 -0.02
task_168332 0.56 -    0.54 0.51 0.44 0.41 -0.10
task_168335 0.94 -    0.94 -    0.93 0.94 -0.00
task_168337 0.84 -    0.86 0.83 0.77 0.61 -0.21
task_168338 1.00 -    1.00 1.00 0.99 0.97 -0.03
task_168868 0.99 0.99 0.99 1.00 0.99 0.99 0.00
task_168908 0.74 0.73 0.76 0.72 -    0.77 0.03
task_168909 0.99 0.96 0.99 0.98 -    0.99 0.01
task_168910 0.72 0.60 0.72 0.72 0.71 0.65 -0.04
task_168911 0.81 0.82 0.82 0.82 0.81 0.81 -0.01
task_168912 0.93 0.92 0.95 0.95 0.95 0.94 -0.00
task_189354 0.67 -    0.67 0.61 0.67 0.65 -0.01
task_189355 0.94 -    0.00 -    -    0.88 0.41
task_189356 0.71 -    0.69 -    -    -    -
task_3      0.99 0.93 0.99 1.00 0.99 0.99 0.01
task_31     0.77 0.66 0.82 -    0.82 0.77 0.00
task_34539  0.95 -    0.95 0.95 0.95 0.95 -0.01
task_3917   0.87 -    0.86 -    0.88 0.86 -0.01
task_3945   0.98 -    0.98 0.98 0.98 0.98 0.00
task_53     0.86 0.67 0.85 0.88 -    0.82 0.01
task_7592   0.87 0.87 0.87 0.86 0.87 0.87 0.00
task_7593   0.97 0.66 0.96 0.80 -    0.95 0.10
task_9952   0.88 0.91 0.90 0.90 0.91 0.91 0.01
task_9977   0.98 0.95 0.97 0.98 0.97 0.96 -0.00
task_9981   0.94 0.86 0.96 0.94 0.96 0.94 0.01

Table 7: Normalized scores obtained by AlphaD3M and the other AutoML systems.

Dataset     AutoGluon AutoWEKA Auto-Sklearn H2O  TPOT AlphaD3M
task_10101  0.97 0.97 0.97 0.97 0.97 1.00
task_12     0.99 1.00 0.99 0.99 -    0.98
task_146195 1.00 0.81 0.98 1.00 0.97 0.92
task_146212 1.00 1.00 1.00 1.00 1.00 1.00
task_146606 1.00 0.82 1.00 0.98 -    0.99
task_146818 1.00 0.94 0.92 0.98 0.95 0.95
task_146821 0.99 1.00 1.00 1.00 1.00 0.97
task_146822 1.00 0.99 1.00 1.00 1.00 1.00
task_146825 1.00 -    0.99 0.99 -    0.94
task_14965  1.00 0.96 1.00 1.00 1.00 1.00
task_167119 0.96 0.83 0.98 1.00 0.94 0.86
task_167120 1.00 1.00 1.00 0.99 -    0.99
task_168329 1.00 0.69 0.96 0.88 0.89 0.94
task_168330 1.00 0.89 1.00 1.00 0.97 0.98
task_168331 1.00 0.84 1.00 0.95 0.90 0.91
task_168332 1.00 -    0.98 0.93 0.80 0.75
task_168335 1.00 -    1.00 -    0.99 0.99
task_168337 0.98 -    1.00 0.97 0.89 0.71
task_168338 1.00 -    1.00 1.00 0.99 0.97
task_168868 1.00 0.99 1.00 1.00 1.00 1.00
task_168908 0.97 0.96 0.99 0.94 -    1.00
task_168909 1.00 0.97 1.00 0.99 -    1.00
task_168910 1.00 0.83 1.00 1.00 0.98 0.90
task_168911 0.99 1.00 1.00 1.00 0.99 0.98
task_168912 0.98 0.97 0.99 1.00 1.00 0.98
task_189354 1.00 -    1.00 0.91 1.00 0.96
task_189355 1.00 -    0.00 -    -    0.94
task_189356 1.00 -    0.97 -    -    -
task_3      1.00 0.94 1.00 1.00 1.00 1.00
task_31     0.94 0.80 1.00 -    1.00 0.94
task_34539  1.00 -    1.00 1.00 0.99 0.99
task_3917   0.99 -    0.98 -    1.00 0.98
task_3945   1.00 -    1.00 0.99 1.00 1.00
task_53     0.97 0.76 0.96 1.00 -    0.93
task_7592   1.00 0.99 1.00 0.99 1.00 1.00
task_7593   1.00 0.68 0.99 0.82 -    0.97
task_9952   0.96 0.99 0.98 0.98 1.00 0.99
task_9977   1.00 0.97 1.00 1.00 1.00 0.99
task_9981   0.98 0.89 1.00 0.98 1.00 0.98
aBnVb0TMwQ
71eJdMzCCIi
automl.cc/AutoML/2023/ABCD_Track
2023
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks
["Roque Lopez", "Raoni Lourenco", "Remi Rampin", "Sonia Castelo", "A\u00e9cio S. R. Santos", "Jorge Henrique Piazentin Ono", "Claudio Silva", "Juliana Freire"]
We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to effectively construct pipelines over a large collection of primitives. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, including the selection of suitable pipelines and the development of solutions for complex systems. We present use cases that demonstrate some of these features. We report the results of detailed experimental evaluations which show that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance that is comparable or superior to state-of-the-art AutoML systems.
["AutoML", "Python Library", "Multiple ML Tasks"]
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks

Roque Lopez (1), Raoni Lourenço (2), Remi Rampin (1), Sonia Castelo (1), Aécio Santos (1), Jorge Ono (1), Claudio Silva (1), Juliana Freire (1)
(1) New York University  (2) University of Luxembourg

Abstract. We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to construct pipelines over a large collection of primitives effectively. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, including selecting suitable pipelines and developing custom solutions for complex problems. We present use cases that demonstrate some of these features. We report the results of a detailed experimental evaluation showing that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems, with performance comparable or superior to state-of-the-art AutoML systems.

1 Introduction

Automated Machine Learning (AutoML) has emerged as an alternative to automatically synthesize machine learning (ML) pipelines, thereby democratizing ML techniques to non-experts as well as increasing the productivity of data scientists. Different approaches have been proposed for AutoML systems. Some focus on specific components of an ML pipeline, such as hyperparameter optimization or model selection, while others, given a dataset and a prediction task, generate end-to-end pipelines that encompass data pre-processing, feature, and model selection (Hutter et al., 2019). Most end-to-end systems are designed to work with tabular data and only support classification and regression problems (Feurer et al., 2015; LeDell and Poirier, 2020; Olson and Moore, 2016; Kotthoff et al., 2017).
Cloud AutoML (Google Cloud AutoML, 2020) and AutoGluon (Erickson et al., 2020) also create pipelines to classify text and images and perform object detection tasks. However, these systems do not support more complex data types such as graphs, time series, audio, and video, limiting the types of problems they can address. Table 1 shows the set of task types supported by different AutoML systems.

In the context of DARPA's Data-Driven Discovery of Models (D3M) program (Elliott, 2020), several AutoML systems have been developed to support a wide range of data types and ML tasks using an extensive set of computational primitives as building blocks – we refer to these as multi-task AutoML systems (MT-AutoML). MT-AutoML systems face an essential challenge: effectively searching an ample space of primitives required to synthesize pipelines for a broad range of tasks and data types. To prune the search space, many D3M MT-AutoML systems use manually-crafted templates and grammars (D3M, 2022) that prescribe combinations of primitives that make sense for different problems.
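To make the idea concrete, a manually-crafted template of this kind can be pictured as a small context-free grammar over primitive classes. This is only an illustrative sketch: the rule structure and the encoder/classifier primitive names below are hypothetical stand-ins (only imputer.SKlearn and svm.SKlearn are mentioned later in the text), not the actual D3M grammar.

```python
# Illustrative sketch of a manually-crafted pipeline grammar.
# Rule and primitive names are hypothetical, not the actual D3M grammar.
GRAMMAR = {
    "PIPELINE":       [["IMPUTATION", "ENCODING", "CLASSIFICATION"]],
    "IMPUTATION":     [["imputer.SKlearn"]],
    "ENCODING":       [["one_hot_encoder.SKlearn"], ["ordinal_encoder.SKlearn"]],
    "CLASSIFICATION": [["random_forest.SKlearn"], ["svm.SKlearn"]],
}

def expand(symbol):
    """Enumerate all terminal pipelines derivable from `symbol`."""
    if symbol not in GRAMMAR:          # terminal: a concrete primitive
        return [[symbol]]
    pipelines = []
    for production in GRAMMAR[symbol]:
        partial = [[]]
        for sym in production:
            partial = [p + tail for p in partial for tail in expand(sym)]
        pipelines.extend(partial)
    return pipelines

for p in expand("PIPELINE"):           # prints the 4 candidate pipelines
    print(" -> ".join(p))
```

Enumerating the grammar makes it clear why templates prune the search: only primitive combinations that fit some production rule are ever considered.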
This, in turn, leads to other challenges: creating these templates or grammars is not only time-consuming, but failing to include the necessary rules that cover the relevant primitives (and their combinations) for multiple task types can negatively impact the ability of an MT-AutoML system to derive performant pipelines.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0

[Table 1: Tasks supported by different AutoML Systems. The columns list 17 task types: tabular, text, image, audio, and video classification; tabular regression; clustering; time series forecasting and classification; object detection; LUPI; community detection; link prediction; graph matching; vertex classification; collaborative filtering; and semi-supervised classification. AlphaD3M supports all of the listed task types, while AutoGluon, AutoWEKA, Auto-Sklearn, Cloud AutoML, H2O, and TPOT each cover only a small subset.]

We present AlphaD3M, an open-source AutoML library [1] that supports a wide range of data and problem types (see Table 1). AlphaD3M introduces new techniques to effectively navigate the large search spaces MT-AutoML systems must explore. They include an algorithm that applies meta-learning to automatically derive task-based context-free grammars (CFGs) which cover a multitude of problems; and a novel search strategy that, based on previously generated pipelines and their performance, prioritizes primitives that are correlated with good pipeline performance.

AlphaD3M includes components that aim to support usability and integration with other tasks in the data science lifecycle, from data exploration and model summarization to model deployment. It is possible to extend AlphaD3M and combine it with other tools through its flexible API. For example, its integration with the PipelineProfiler (Ono et al., 2021) allows users to explore and compare the set of derived pipelines visually.
Besides describing the API and these components, we also present case studies demonstrating how users can improve ML solutions via interaction in AlphaD3M.

We conducted a detailed experimental evaluation to assess the ability of AlphaD3M to handle a rich set of tasks and data types, as well as to compare its performance against state-of-the-art AutoML and MT-AutoML systems. We used two benchmarks: (a) a collection of 112 datasets that covers seventeen different ML tasks, and (b) the OpenML AutoML Benchmark for tabular classification problems. Our results show that the search strategies used by AlphaD3M are effective: the system generates pipelines whose performance is superior or on par with those derived by other systems, including systems that focus on a small set of problems and have to navigate a much smaller search space.

2 Related Work

Task Coverage. Many AutoML systems have been proposed to work with tabular data, for example: Auto-sklearn (Feurer et al., 2015), TPOT (Olson and Moore, 2016), and H2O (LeDell and Poirier, 2020). The deep reinforcement learning algorithm proposed by Drori et al. (2019) aimed to support multiple learning tasks and data types; however, its implementation was limited to classification and regression tasks over tabular and text data. AutoML systems developed in industry, such as Cloud AutoML by Google and AutoGluon by Amazon, handle text and image data, but still support a limited number of learning tasks. In contrast, AlphaD3M supports a wide range of data types (tabular, text, images, audio, video, and graph) and a rich set of ML tasks, as shown in Table 1.

Data and Model Exploration. Interactive data analytics systems such as Visus (Santos et al., 2019), TwoRavens (Gil et al., 2019), and Snowcat (Cashman et al., 2018) have been developed to guide users throughout the model-building process, from exploring the input data to comparing the ML pipelines produced by AutoML systems.
They target primarily domain experts who have little or no expertise in ML, and thus lack support for the customization of pipelines for complex problems. These systems trade off flexibility for ease of use. As such, they are limited to the operations implemented in their visual interfaces; extensive and time-consuming changes in their workflows are required to support new data types and tasks (e.g., graph data). Other approaches mimic the interface of traditional ML libraries, through which developers often build a single solution for a given task (Grafberger et al., 2021). AlphaD3M allows ML experts to explore the derived pipelines and customize them through a user-friendly interface within a Jupyter Notebook environment. In addition, instead of retrieving only the best pipeline, AlphaD3M returns all valid pipelines, ranks them, and presents them to the user for comparison, refinement, and selection.

[1] https://gitlab.com/ViDA-NYU/d3m/alphad3m

3 The AlphaD3M Library

[Figure 1: Overview of AlphaD3M.]

AlphaD3M is a multi-task AutoML system. It is implemented in Python and can be used via pip installation or Docker. Figure 1 shows an overview of this library and its components. To build ML pipelines, AlphaD3M uses a rich set of primitives and a meta-learning database from the D3M ecosystem (D3M, 2022). The pipeline search is conducted by four modules which: (a) automatically construct task-specific grammars; (b) prioritize primitives that are more likely to be effective; (c) synthesize pipelines using Monte Carlo Tree Search and Neural Networks (Drori et al., 2019); and (d) tune hyperparameters. The library implements a Python API through which users can define the problem to be solved, explore the input data, obtain model summaries, analyze and compare the produced pipelines, as well as improve and deploy them.

3.1 The D3M Ecosystem

Primitives. AlphaD3M uses a comprehensive collection of primitives developed by performers in the D3M program as well as from open-source libraries (e.g., scikit-learn).
In total, there are 312 primitives available for different steps in ML pipelines, including data pre-processing, feature extraction, feature selection, prediction, and clustering (D3M Primitives, 2022), and they implement state-of-the-art methods such as ResNet50 (He et al., 2016) and ARIMA (Wilson, 2016), among others.

The Marvin Meta-Learning Database. Marvin is an open corpus of curated ML pipelines, datasets, and problems (Marvin, 2020). All pipelines in Marvin share the same set of primitives and are specified using the D3M format. Marvin stores approximately 2.5 million pipelines executed over 600 datasets. Since data scientists and AutoML systems that use different search strategies have produced these pipelines, the database covers a wide variety of pipeline patterns. As discussed below, we leverage the data in Marvin to assist in and improve the AlphaD3M search process. To the best of our knowledge, ours is the first work that explores this corpus.

3.2 Pipeline Search

The automatic synthesis of pipelines is a combinatorial problem in which we must find the best combinations of primitives and their hyperparameters. With 312 primitives and over 1,500 hyperparameters in the D3M ecosystem, the search space becomes prohibitively large. For instance, considering just the classification task over tabular data, there are 22 data cleaning, 87 data transformation, and 44 classifier primitives, leading to 84,216 possible pipelines to test. AlphaD3M uses a multi-pronged approach to manage this search space, described below.

A. Pipeline Synthesis Using Monte Carlo Tree Search and Neural Networks. To synthesize the ML pipelines, AlphaD3M uses the strategy introduced by Drori et al. (2019), which is based on a single-player game technique inspired by AlphaZero (Silver et al., 2017). It applies model-based reinforcement learning with a neural network sequence model and a Monte Carlo Tree Search (MCTS).
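As a quick sanity check, the search-space figure quoted above for tabular classification (22 data cleaning, 87 data transformation, and 44 classifier primitives, assuming one primitive per step) can be reproduced with simple arithmetic:

```python
# Primitive counts for tabular classification, as reported in the text.
data_cleaning = 22
data_transformation = 87
classifiers = 44

# One pipeline = one choice per step (cleaning -> transformation -> classifier).
pipelines = data_cleaning * data_transformation * classifiers
print(pipelines)  # 84216
```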
The metadata encoding the pipeline, the dataset, and the task are analogous to an entire game board configuration in AlphaZero. The possible game states consist of all valid pipelines generated from a set of primitives and modified by actions guided by a manually-designed CFG. The model outputs a sequence of primitives, and pipelines are constructed by an LSTM. Given a state s composed of a vector encoding the whole board configuration (dataset, task, pipeline), the neural network predicts the probabilities P(s,a) over actions a from a state s. This process produces a set of action sequences S that describe a pipeline, which in turn solves task T on dataset D. The network also outputs an estimate of pipeline performance v. The reinforcement learning algorithm takes the predictions (P(s,a), v(s)) produced by the neural network and uses them in the MCTS by running multiple simulations to search for the pipeline sequence R with the best evaluation. An important benefit of this strategy is that it learns to synthesize pipelines.

B. Automatic Generation of Task-Based CFGs via Meta-Learning. Manually designed CFGs have many limitations; notably, they may not cover all applicable rules and pipeline structures, and consequently prevent the search process from exploring desirable pipelines that do not fit the grammar. Furthermore, to create the production rules or patterns in the grammar, a user needs to have knowledge of all the available primitives for a specific task and how they work. For large primitive collections, this is a difficult task, which is compounded for MT-AutoML systems that support multiple problem types. Instead of relying on manually created CFGs, we propose a new strategy that uses meta-learning to derive grammars automatically and on the fly. It does so in two steps: 1) it selects task-specific pipelines and datasets from a meta-learning database (MLDB), and 2) it uses these to derive a portfolio of pipeline patterns.

Selecting Task-Oriented Datasets.
Since AlphaD3M supports different tasks, we need to retrieve from the Marvin MLDB pipelines produced for tasks and datasets similar to the ones provided as inputs to the AutoML system. For instance, if we want to solve a clustering problem over a dataset D, we retrieve the pipelines used for this problem over datasets similar to D. To select relevant pipelines for a given problem P over dataset D, we use the "task keywords" tag list provided in the problem definition as features that describe the task to be solved, and search Marvin for pipelines that contain a similar set of keywords. The list is encoded as a bag-of-words (BOW). Since the set is small and most of the tags are non-standard words, e.g., collaborativeFiltering or timeSeries, it is possible to obtain accurate matches with this simple approach.

Given the set of relevant pipelines RP, we select a subset RPD containing pipelines that were applied on datasets similar to D. To determine whether two datasets are similar, we use dataset features including semantic types (e.g., categorical, date-time) and missing values, and encode them using one-hot encoding. Datasets are compared using cosine similarity.

The current implementation uses 16 unique semantic types detected by the datamart_profiler (Datamart Profiler Library, 2021). In contrast to other approaches like TabSim (Habibi et al., 2020) or StruBERT (Trabelsi et al., 2022), AlphaD3M uses semantic types because the grammar defines components to handle the dataset's features, such as categorical or date-time encoders, and these components are strongly related to semantic types. Also, these approaches focus on tabular datasets, whereas AlphaD3M handles other types of datasets, such as image and text datasets. Finally, running these approaches is very time-consuming.

Creating a Portfolio of Patterns. After identifying similar datasets, the next step is to select the best pipelines to create a portfolio of pipeline patterns.
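A minimal sketch of the dataset-matching step just described, using a one-hot encoding over semantic types compared by cosine similarity. The vocabulary and feature values below are made up for illustration (the real profiler detects 16 semantic types):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def one_hot(types, vocabulary):
    return [1 if t in types else 0 for t in vocabulary]

# Hypothetical semantic-type vocabulary (illustrative subset).
VOCAB = ["categorical", "date-time", "integer", "real", "text", "missing-values"]

query  = one_hot({"categorical", "integer", "missing-values"}, VOCAB)
corpus = {
    "dataset_a": one_hot({"categorical", "integer"}, VOCAB),
    "dataset_b": one_hot({"text", "date-time"}, VOCAB),
}
scores = {name: cosine(query, vec) for name, vec in corpus.items()}
best = max(scores, key=scores.get)
print(best)  # dataset_a: it shares two semantic types with the query
```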
To select these, AlphaD3M takes into consideration pipeline performance for different datasets. Some datasets are more challenging than others – the performance of a pipeline can vary widely across datasets. To properly compare pipeline performance, AlphaD3M uses a strategy based on the average distance to minimum (ADTM) (Wistuba et al., 2015), which transforms the performance to the distance to the best-observed performance, scaled between 0 and 1. In contrast to ADTM, which uses the misclassification rate, AlphaD3M uses the actual performance (the score) of the pipelines and thus applies the average distance to maximum instead to select the best pipelines. It then transforms the primitives within the pipelines to their classes. For instance, the primitive imputer.SKlearn belongs to the class IMPUTATION. If there is a pipeline with this structure: [imputer.SKlearn svm.SKlearn], it is converted to this pattern: [IMPUTATION CLASSIFICATION]. Unlike Feurer et al. (2021), which creates a unique portfolio of pipelines in an offline phase, AlphaD3M creates the portfolio online, based on the query task and dataset. Also, the output is a portfolio of patterns, not of static pipelines, which allows more flexibility to construct pipelines. These patterns are used as production rules of the grammar. Algorithm 1 in the Appendix describes the process of building the grammar.

C. Prioritization of Primitives. When a data scientist builds an ML pipeline, they start this process using primitives that are known to perform well. For example, XGBoost or Random Forests are good initial candidates for classification tasks. AlphaD3M follows this intuition to identify good candidate primitives for a specific task, using the data from Marvin. This prior knowledge about promising primitives can be helpful to find better pipelines faster.

Similar to Ono et al. (2021), AlphaD3M uses Pearson Correlation (PC) to estimate how much a primitive contributes to the score of the pipeline.
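The two preparation steps described above, rescaling scores by their distance to the best observed performance and abstracting primitives into their classes, can be sketched as follows (the rescaling follows the distance-to-maximum idea in the text, and the class mapping is a small illustrative subset):

```python
# Rescale pipeline scores on one dataset to their distance to the best score,
# scaled between 0 and 1 (distance-to-maximum idea described in the text).
def distance_to_maximum(scores):
    best, worst = max(scores), min(scores)
    if best == worst:
        return [0.0] * len(scores)
    return [(best - s) / (best - worst) for s in scores]

# Illustrative primitive -> class mapping (not the full D3M taxonomy).
PRIMITIVE_CLASS = {
    "imputer.SKlearn": "IMPUTATION",
    "svm.SKlearn": "CLASSIFICATION",
    "random_forest.SKlearn": "CLASSIFICATION",
}

def to_pattern(pipeline):
    """Abstract a concrete pipeline into a pattern of primitive classes."""
    return [PRIMITIVE_CLASS[p] for p in pipeline]

print(distance_to_maximum([0.9, 0.8, 0.6]))            # [0.0, ~0.33, 1.0]
print(to_pattern(["imputer.SKlearn", "svm.SKlearn"]))  # the [IMPUTATION CLASSIFICATION] pattern
```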
However, instead of using the raw scores, it uses the ADTM values because they are scaled across different datasets. AlphaD3M estimates the primitive importance using PC between the primitive indicator vector p (p_i = 1 if pipeline i contains the primitive in question and p_i = 0 otherwise) and the pipeline score vector s, where s_i is the score for pipeline i. Since p and s are dichotomous and quantitative variables, respectively, the Point-Biserial Correlation coefficient (PBC) (Sheskin, 2003) is an appropriate correlation measure – it is mathematically equivalent to the PC but can be calculated with fewer operations. The correlation values are normalized between 0 and 1 (using min-max normalization).

AlphaD3M calculates these correlations for the primitives at two levels: (a) global, when it considers all the pipelines, and (b) local, when it considers only the pipelines for each pattern. The main goal is to estimate how important a primitive is for all the pipelines and for each pattern. Primitives with higher importance values should have priority during the search for pipelines. Algorithm 2 describes the process of calculating the primitive importance values in detail (see the Appendix). To prioritize the usage of promising primitives, AlphaD3M includes these importance values in the MCTS formula:

    U(s,a) = Q(s,a) + c (α P(s,a) + (1 − α) R(a)) · √N(s) / (1 + N(s,a))    (1)

where Q(s,a) is the expected reward for action a (selection of primitive a) from state s, N(s,a) is the number of times action a was taken from state s, and N(s) is the number of times state s was visited. P(s,a) are the probabilities predicted by the neural network over actions a from a state s, c is a constant which determines the amount of exploration, R(a) = G(a) · L(a), where G(a) and L(a) are the global and local importance of the action a, and α is a coefficient to keep the trade-off between R(a) and P(s,a).

D. Decoupled Hyperparameter Tuning.
Hyperparameter tuning is an essential part of fitting machine learning models (Bergstra et al., 2011; Snoek et al., 2015; Dolatnia et al., 2016). This is also the case for end-to-end ML pipelines that target different tasks, where all primitives contain hyperparameters, not just the estimators.

AlphaD3M performs hyperparameter tuning as an independent task, after the pipelines are constructed. It uses Bayesian optimization, which is the state of the art for hyperparameter tuning (Bergstra and Bengio, 2012; Snoek et al., 2015; Dolatnia et al., 2016) and was shown to outperform manual setting of parameters, grid search, and random search (Bergstra and Bengio, 2012; Turner et al., 2021).

[Figure 2: (a) A code snippet to solve a semi-supervised classification task. (b) AlphaD3M allows users to inspect the contents of the input dataset, including column statistics and data types. (c) Analyzing ML pipelines through the integration with PipelineProfiler.]

Tuning Top-k Pipelines. AlphaD3M synthesizes and evaluates the pipelines using primitives with default values for hyperparameters. The pipelines are then ranked by performance, and the top-k pipelines are selected for tuning. AlphaD3M uses Sequential Model-Based Algorithm Configuration (SMAC) (Lindauer et al., 2022), a Python library for Bayesian optimization. It approximates a probability model of the performance outcome given a parameter configuration, which is updated from a history of executions. AlphaD3M selects the Gaussian Process models from SMAC to minimize an arbitrary acquisition function, using the Expected Improvement criterion to choose the parameter values for each iteration until a condition (a number of iterations) is met. The acquisition function is designed to normalize the performance metric used to synthesize the pipelines between zero and one; as the pipeline execution evaluations increase, the acquisition function gets closer to zero. SMAC requires a set of unique parameters to assign values during its tuning procedure.
Since AlphaD3M considers multiple primitives with identical parameter names, it constructs an internal hierarchical nomenclature of parameters and designs their dependencies using ConfigSpace.

3.3 The API

We have developed a Python-based API that supports the process of building and exploring ML pipelines within a Jupyter Notebook environment. The API is integrated with the D3M AutoML systems and supports various dataset formats such as raw CSV, D3M, and OpenML. Model synthesis can be done with a few lines of code, as shown in Figure 2(a). The API allows users to (a) define a problem, (b) explore summaries of their input dataset, (c) summarize the produced pipelines, and (d) analyze and compare pipelines with respect to their performance scores and prediction outputs. We describe the main components of the API below.

Problem Definition. To build a predictive model, AlphaD3M needs a problem specification that describes a prediction problem, specifically: (a) the training dataset; (b) a target variable, i.e., what should be predicted by the predictive model; (c) the maximum running time that controls how long the search can take (to control the use of computational resources); (d) the desired performance metric; and (e) a list of task keywords that specify the kind of prediction task and, therefore, the techniques that should be used to solve the prediction problem. Figure 2(a) shows an example of how to define a problem in AlphaD3M.

Table 2: Comparison of MT-AutoML systems with respect to the number of supported task types, winner pipelines, and average rank by each system.

                          AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al. (2019)
Unique ML tasks supported 17       16      15       17   15     16       14      2
Winner pipelines          49       39      30       21   20     11       10      7
Average rank              2.85     2.89    2.90     3.99 4.68   5.32     5.73    6.85
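Items (a) through (e) of the problem specification can be pictured as a plain configuration object. The field names and values below are hypothetical illustrations, not the actual AlphaD3M API; Figure 2(a) shows the real code snippet.

```python
# Hypothetical problem specification covering items (a)-(e) described above.
# All names and values are illustrative, not the actual AlphaD3M API.
problem_spec = {
    "train_dataset": "my_dataset.csv",   # (a) training dataset (made-up path)
    "target": "label",                   # (b) variable to predict
    "time_bound_minutes": 60,            # (c) maximum search time
    "metric": "f1",                      # (d) performance metric to optimize
    "task_keywords": ["classification", "tabular"],  # (e) kind of task
}
print(sorted(problem_spec))
```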
For example, itshows different visualizations (compact, detail, and column views) that summarize the content oftabular datasets (see Figure 2 (b)).Pipeline Summary. After the pipeline search is complete, users can display a leaderboard, trainindividual pipelines with the complete data, perform predictions and evaluate them against aheld-out dataset.Pipeline Exploration. Users can analyze the produced pipelines using the PipelineProfiler Onoet al. (2021), which is fully integrated into AlphaD3M as shown in Figure 2(c). PipelineProfiler isa visual analytics tool that enables users to compare and explore the pipelines generated by theAutoML systems.Pipeline Refinement and Deployment. AlphaD3M allows users to save and load pipelines, enablingusers to reload them later and perform analyses without having to re-run the AutoML search.They can load the saved pipelines at any time for training or testing purposes. In addition, userscan export pipelines to Python code. This gives them more control and the ability to modify(and customize) the automatically generated pipelines (e.g., change hyperparameters, or replacea classifier primitive). More information about the API can be found on the documentation webpage: https://alphad3m.readthedocs.io/en/latest/api.html .4 EvaluationTo demonstrate the effectiveness of AlphaD3M and its ability to handle a rich set of ML tasks, wecompared AlphaD3M with state-of-the-art AutoML systems using two dataset collections. We alsopresent use cases to show how useful, flexible, and easy to use AlphaD3M is.4.1 Comparing AutoML SystemsD3M Datasets. This collection contains challenging datasets and cover a wide variety of tasks (atotal of 17 task types) and data types (see Table 3). We evaluated all the systems using train and testsplits. In most of the cases, the sizes are 0.8 and 0.2 for the train and test splits, respectively (see thedataset’s repository2for details). 
For each dataset, we ran the systems over the train split for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021). After that, we evaluated the best pipeline produced by each system on the test split. For this experiment, we used 1 GPU (GeForce GTX 1080 Ti), 14 CPU cores (Intel Xeon E5-2695 v4, 2.10 GHz), and 56 GB memory.

Table 2 shows the number of supported task types (ML tasks), winner pipelines (i.e., pipelines with the best performance for a given dataset), and the average rank of each AutoML system (the rank of each system among the 8 AutoML systems applied to each dataset). If two or more systems produce pipelines that tie in the best score, all of them are considered winner pipelines. As we can see, AlphaD3M and Aika were able to solve 17 out of 17 unique tasks, obtaining the best coverage. We also evaluated the effectiveness of AlphaD3M. It had the best overall performance, producing the best pipeline for 49 datasets with the best average rank (2.85).

[2] https://datasets.datadrivendiscovery.org/d3m/datasets

Table 3: Number of datasets by task type and number of solved datasets by each AutoML system for all task types covered by the D3M datasets.

ML Task                           AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al. (2019)
Tabular Classification (20)       20 19 18 20 18 17 13 20
Tabular Regression (11)           11 11 11 8 9 6 5 9
Image Classification (9)          9 8 9 9 7 7 2 0
Image Regression (1)              1 1 1 1 1 1 1 0
Text Classification (9)           9 9 9 9 8 8 9 0
Audio Classification (2)          2 2 2 2 1 2 2 0
Graph Matching (3)                3 3 3 3 2 2 2 0
Time series Forecasting (13)      13 13 13 13 2 12 10 0
Link Prediction (3)               3 3 3 3 2 2 2 0
Collaborative Filtering (1)       1 0 1 1 0 1 0 0
Time series Classification (19)   19 19 19 17 19 15 19 0
Community Detection (3)           3 3 0 2 2 1 0 0
Video Classification (2)          2 2 2 2 0 2 2 0
Vertex Classification (4)         4 4 4 4 4 4 4 0
Object Detection (2)              2 2 0 1 1 0 0 0
Semisupervised Classification (6) 6 6 6 3 6 4 3 0
LUPI (4)                          4 4 4 4 4 4 4 0

Analyzing the support for each task type individually in Table 3, we can see that AlphaD3M was able to produce valid pipelines for all the datasets, and it solved more datasets than the other systems. Even though AlphaD3M is inspired by Drori et al. (2019), in Table 2 and Table 3 we can clearly see the difference between them: AlphaD3M handles a larger number of tasks and produces many more winner pipelines. This shows that the different components of AlphaD3M are effective at handling the larger search spaces required by MT-AutoML systems. The detailed scores obtained by each system on all the D3M datasets and the average rank by tasks can be found in Table 4 and Table 5 (Appendix).

Additionally, we calculated the number of winner pipelines for the top-3 systems only on the datasets where all of them produced pipelines. The AlphaD3M, Ensemble, and AutonML systems got 48, 42, and 38, respectively.
These results confirm that the superior performance of AlphaD3M is not solely due to its support for a broader range of ML tasks.

[Figure 3: Ablation study for the different components of AlphaD3M.]

We performed an ablation study to analyze the contribution of each component of AlphaD3M on a random sample of five D3M datasets for classification tasks [2] (datasets for which AlphaD3M obtained the best, average, and worst performances). Figure 3 shows the best scores for each dataset reached by the full AlphaD3M and the versions with some components removed (or replaced). As we can see, using all components leads to the best results.

To evaluate the importance of the automatic grammar, we replaced it with the manually-designed grammar used in Drori et al. (2019). For the POKER, SPECTRO, WORDS, and SICK datasets, AlphaD3M was not able to produce valid pipelines when the manual grammar was used, which highlights the importance of automatically generating the grammar. These datasets contain multiple types of features, such as text and DateTime, which were not covered by the manually-constructed grammar. The prioritization of primitives also plays an important role in AlphaD3M. When this feature was not used, the performance decreased, e.g., in the POKER, SPECTRO, and LIBRAS datasets. As we can see in Figure 3, in most of the datasets, AlphaD3M obtained the same results when we removed the hyperparameter tuning component. This suggests that the heuristic used by AlphaD3M (tuning only the top-k pipelines) may miss good pipelines that would attain better performance after tuning. In future work, we plan to investigate alternative strategies for hyperparameter tuning that attain a better balance of computational cost and pipeline performance.

[Figure 4: Performance of AutoML systems in OpenML Benchmark. X-axis shows the accuracy values (normalized by the best score), and Y-axis shows the IDs of the OpenML tasks.]

OpenML Benchmark. Similar to Erickson et al.
(2020), we compared our system with AutoWEKA, TPOT, H2O, AutoGluon, and Auto-Sklearn 2.0 (hereinafter referred to as Auto-Sklearn) on the 39 OpenML datasets (Gijsbers et al., 2019). This corpus contains a variety of datasets intended to represent real-world data science problems and covers binary and multiclass classification tasks. We used AMLB (Gijsbers et al., 2022) to compare the systems, running them locally for one hour with a single fold split and accuracy as the optimization metric. For this experiment, we used 4 CPU cores (Intel Xeon Platinum 8268 Processor, 2.9 GHz) and 32 GB of memory.

Figure 4 shows the scores (normalized by the best score) of all the systems (the detailed scores can be found in Tables 6 and 7 in the Appendix). As we can see, AlphaD3M produced pipelines whose performance is on par with the other AutoML systems. We also calculated the average rank of all the systems across the 39 datasets. AlphaD3M got an average rank of 3.64, while Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA got 2.08, 2.33, 3.08, 3.72, and 5.10, respectively. To better understand these numbers, we also estimated the performance gain of the pipelines found by AlphaD3M against pipelines generated by other systems. The average gain of AlphaD3M for the OpenML datasets was +0.001, which shows that, in general, AlphaD3M attained good results for this collection. We analyzed the 3 datasets (task_146195, task_167119, and task_168331) for which AlphaD3M generated pipelines with performance lower than other systems. This happened because these datasets are imbalanced with multiple classes. The performance of AlphaD3M on these could be improved with the inclusion of primitives to handle imbalanced datasets. This underscores the importance of being able to add primitives to AutoML systems.

Concerning coverage, it is important to highlight that AlphaD3M succeeded for 38 datasets. Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA solved 39, 39, 34, 29, and 28 datasets, respectively.
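The average-rank comparison above can be sketched as follows: on each dataset, the systems are ranked by score (1 = best), and each system's ranks are averaged over the datasets. This is an illustrative sketch with toy numbers, not the benchmark's actual code; here a failed run (None) is simply ranked last, and ties receive distinct ranks, which are simplifying conventions.

```python
def average_ranks(per_dataset_scores):
    """Average rank of each system across datasets; higher score = better rank."""
    totals, counts = {}, {}
    for by_system in per_dataset_scores:
        # Sort systems from best to worst score; failed runs (None) go last.
        ordered = sorted(by_system,
                         key=lambda s: (by_system[s] is None, -(by_system[s] or 0)))
        for rank, system in enumerate(ordered, start=1):
            totals[system] = totals.get(system, 0) + rank
            counts[system] = counts.get(system, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

toy = [
    {"A": 0.9, "B": 0.8, "C": None},
    {"A": 0.7, "B": 0.9, "C": 0.8},
]
print(average_ranks(toy))  # A: (1+3)/2 = 2.0, B: (2+1)/2 = 1.5, C: (3+2)/2 = 2.5
```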
As pointed out by Gijsbers et al. (2022), the results of Auto-Sklearn on the OpenML datasets must be considered very carefully, since there could be an overlap between the datasets used in its meta-learning process and the ones used in the evaluation. It is important to highlight that none of the OpenML datasets are included in the version of Marvin that was used by AlphaD3M in these experiments.

4.2 Use Cases

Pivoting across ML tasks. Predicting hostile actions against ships and mariners worldwide is important to prevent piracy and prosecute the aggressors. Consider an analyst from the U.S. National Geospatial-Intelligence Agency (NGA) who is building a model using the Anti-Shipping Activity Messages dataset (ASAM, 2021). She wants to identify which records mention guns and which records do not. This is a non-trivial problem, since a variety of terms (e.g., pistol, rifle, etc.) indicate whether a gun is present. This dataset contains 8,000 documents, of which 1,400 were annotated. She started by using AlphaD3M to create models using the 1,400 labeled documents, setting the model search to 1 hour. AlphaD3M derived high-quality pipelines – the best pipeline had an F1 score of 0.90. However, she wondered whether these pipelines could be further improved, in particular by leveraging the 6,600 unlabeled documents through semi-supervised learning. AlphaD3M supports a wide range of tasks, including semi-supervised learning – users just need to add the keyword “semiSupervised” as a parameter. The user then ran a new experiment using the 1,400 labeled and 6,000 unlabeled instances as a training dataset. The results improved from an F1 score of 0.90 to 0.95. These experiments show that by using AlphaD3M, data scientists can improve their results, pivoting from one task (classification) to another (semi-supervised classification) very quickly.

Reducing pipeline execution time through model exploration.
Using content analysis and predictive modeling for conflict assessment is a common approach for conflict analysts to guide policy-making decisions (D’Orazio, 2020). Consider a conflict analyst trying to categorize explosion events that involve terrorist activities. She uses the explosion events dataset (Raleigh et al., 2010), which contains 20,000 articles describing events that involve terrorist activities. An article is relevant if it describes attacks involving explosions. To create classification models, she ran AlphaD3M for 1 hour. The system synthesized high-quality pipelines, with F1 values around 0.9. To identify the most suitable pipeline, she used the PipelineProfiler to explore the derived models. She observed that the top-10 pipelines had similar scores, but their execution times were above 800 seconds. To address this problem, she tried a different strategy: combining progressive sampling and active learning to reduce the amount of training data from 20,000 to 3,200 documents. Then, she re-ran AlphaD3M using the smaller set as the training dataset, while keeping the rest of the workflow unchanged. The top F1 score improved from 0.91 to 0.96, and the execution time dropped from 800 to 125 seconds.

5 Conclusions

We introduced AlphaD3M, an MT-AutoML library that automatically synthesizes end-to-end pipelines for 17 ML tasks and 6 different data types. AlphaD3M introduces new methods to automatically derive grammars and prioritize primitives, which are essential for effectively managing the large space MT-AutoML systems must search. In addition, AlphaD3M embraces a user-in-the-loop approach, through an API that allows users to explore the input data and the derived ML pipelines, as well as customize the pipelines. We presented a detailed experimental evaluation that compares our approach to several state-of-the-art AutoML systems over different problems and datasets.
The results suggest that AlphaD3M is effective: not only does it solve a larger number of problem types, but it also derives pipelines with performance that is superior or on par with those derived by other systems.

Although AlphaD3M’s approach is primitive-agnostic, so far it only relies on the D3M primitives to build ML pipelines. We plan to extend AlphaD3M by including additional state-of-the-art and more recent primitives, e.g., models published in the HuggingFace or PyTorch Hub repositories. Moreover, we would like to improve the system’s interoperability with existing open-source primitives that use standard APIs, such as the well-known scikit-learn fit-predict API.

Acknowledgements. This work was partially supported by the DARPA D3M program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

ASAM (2021). ASAM: Anti-Shipping Activity Messages. https://msi.nga.mil/Piracy.

Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for Hyper-Parameter Optimization. In Proceedings of NIPS, pages 2546–2554.

Bergstra, J. and Bengio, Y. (2012). Random Search for Hyper-parameter Optimization. JMLR, pages 281–305.

Cashman, D., Humayoun, S. R., Heimerl, F., Park, K., Das, S., Thompson, J., Saket, B., Mosca, A., Stasko, J. T., Endert, A., Gleicher, M., and Chang, R. (2018). Visual Analytics for Automated Model Discovery. CoRR.

D3M (2022). D3M Website. https://datadrivendiscovery.org.

D3M Primitives (2022). D3M Primitives Website. https://gitlab.com/datadrivendiscovery/primitives/-/tree/master/primitives.

Datamart Profiler Library (2021). Datamart Profiler Website. https://pypi.org/project/datamart-profiler/.

Dolatnia, N., Fern, A., and Fern, X. (2016). Bayesian Optimization with Resource Constraints and Production. In Proceedings of ICAPS, pages 115–123.

D’Orazio, V. (2020). Conflict Forecasting and Prediction.
In Oxford Research Encyclopedia of International Studies. Oxford University Press.

Drori, I., Krishnamurthy, Y., Lourenco, R., Rampin, R., Cho, K., Silva, C., and Freire, J. (2019). Automatic Machine Learning by Pipeline Synthesis using Model-based Reinforcement Learning and a Grammar. In 6th ICML Workshop on Automated Machine Learning.

Elliott, J. (2020). DARPA Data-Driven Discovery of Models (D3M) Program. https://www.darpa.mil/program/data-driven-discovery-of-models.

Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv preprint arXiv:2003.06505.

Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2021). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and Robust Automated Machine Learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.

Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML Benchmark.

Gijsbers, P., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2019). An Open Source AutoML Benchmark. In 6th ICML Workshop on Automated Machine Learning.

Gil, Y., Honaker, J., Gupta, S., Ma, Y., D’Orazio, V., Garijo, D., Gadewar, S., Yang, Q., and Jahanshad, N. (2019). Towards Human-guided Machine Learning. In Proceedings of the Conference on Intelligent User Interfaces (IUI), pages 614–624. ACM.

Google Cloud AutoML (2020). Google Cloud AutoML Website. https://cloud.google.com/automl.

Grafberger, S., Guha, S., Stoyanovich, J., and Schelter, S. (2021). MLINSPECT: a Data Distribution Debugger for Machine Learning Pipelines. age, 20:123.

Habibi, M., Starlinger, J., and Leser, U. (2020).
Tabsim: A Siamese Neural Network for Accurate Estimation of Table Similarity. In 2020 IEEE International Conference on Big Data (Big Data), pages 930–937. IEEE.

He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.

Hutter, F., Kotthoff, L., and Vanschoren, J. (2019). Automated Machine Learning: Methods, Systems, Challenges. Springer.

Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., and Leyton-Brown, K. (2017). Auto-WEKA 2.0: Automatic Model Selection and Hyperparameter Optimization in WEKA. The Journal of Machine Learning Research, 18(1).

LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable Automatic Machine Learning. 7th ICML Workshop on Automated Machine Learning (AutoML).

Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T., Sass, R., and Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research, 23(54):1–9.

Marvin (2020). Marvin Website. https://datadrivendiscovery.org/marvin.

Olson, R. S. and Moore, J. H. (2016). TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning. In ICML AutoML Workshop, pages 66–74.

Ono, J. P., Castelo, S., López, R., Bertini, E., Freire, J., and Silva, C. T. (2021). PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines. IEEE Transactions on Visualization and Computer Graphics, 27:390–400.

Raleigh, C., Linke, A., Hegre, H., and Karlsen, J. (2010). Introducing ACLED: An Armed Conflict Location and Event Dataset: Special Data Feature. Journal of Peace Research, 47(5):651–660.

Santos, A., Castelo, S., Felix, C., Ono, J. P., Yu, B., Hong, S. R., Silva, C. T., Bertini, E., and Freire, J. (2019). Visus: An Interactive System for Automatic Machine Learning Model Building and Curation.
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA), pages 1–7. Association for Computing Machinery.

Sheskin, D. J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press.

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. Conference on Neural Information Processing Systems.

Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M. M. A., Prabhat, P., and Adams, R. P. (2015). Scalable Bayesian Optimization Using Deep Neural Networks. In Proceedings of the ICML, pages 2171–2180.

Trabelsi, M., Chen, Z., Zhang, S., Davison, B. D., and Heflin, J. (2022). StruBERT: Structure-aware BERT for Table Search and Matching. arXiv preprint arXiv:2203.14278.

Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., and Guyon, I. (2021). Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020. CoRR, abs/2104.10201.

Wilson, G. T. (2016). Time Series Analysis: Forecasting and Control, 5th Edition. Journal of Time Series Analysis, 37(5):709–711.

Wistuba, M., Schilling, N., and Schmidt-Thieme, L. (2015). Learning Hyperparameter Optimization Initializations. In 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1–10. IEEE.

A Broader Impact Statement

AlphaD3M can potentially strengthen the efforts in democratizing data science by broadening the application of automated predictive pipelines. Subject experts can create their own pipelines and explore them in the context of an ethical framework.
Its interoperable software infrastructure enables external auditing and improves the trust and interpretability of synthesized pipelines. The search space management mechanism also allows efficient resource allocation and helps to prototype pipelines before performing high energy-consuming model training.

B Submission Checklist

1. For all authors. . .

(a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] See it mainly in Sections 3 and 4.

(b) Did you describe the limitations of your work? [Yes] See Section 5. We also discuss the infeasibility of AutoML systems in general, and our efforts to mitigate limitations.

(c) Did you discuss any potential negative societal impacts of your work? [No] However, we advocate for the necessity of human-in-the-loop to build trust in the generated pipelines.

(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] Our paper follows these guidelines.

2. If you are including theoretical results. . .

(a) Did you state the full set of assumptions of all theoretical results? [N/A] We are not including theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] We are not including theoretical results.

3. If you ran experiments. . .

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? [Yes] We provide a link to our public GitLab repository and documentation webpage, where users can find information about the installation and instructions to run our system.
The reported evaluation was conducted by a third (independent) party in a competition among AutoML systems, so we cannot release that code.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] See the scripts/paper_automlconference folder in our repository.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] See the scripts/paper_automlconference folder in our repository.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] Our code is well documented and follows coding standards and best practices. We provide different Jupyter notebook examples and an API to show how to use AlphaD3M.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [No] We do not specify all the details. However, some details, like the data split and search spaces, are publicly available in the references.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] See Section 4.1.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 4.1.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We presented two comparisons (see Section 4). For the first comparison, we used the same protocol. For the second one, we used an existing asset and we evaluated our system using the same time protocol.

(i) Did you compare performance over time?
[No] We ran the systems during one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021), and reported the best score during this time.

(j) Did you perform multiple runs of your experiments and report random seeds? [N/A] We do not perform multiple runs of our experiments.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not report error bars.

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A] We did not use surrogate benchmarks.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [No] Some of the reported evaluations were conducted by a third party.

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? [N/A] The hyperparameters were automatically tuned by our AutoML engine.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .

(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 4.1.

(b) Did you mention the license of the assets? [No] However, all assets are publicly available and the licenses can be retrieved from the references.

(c) Did you include any new assets either in the supplemental material or as a url? [Yes] We included a url to the data used in the experiments.

(d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A] The assets used in this paper are publicly available.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data used do not contain personally identifiable information nor offensive content.

5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not carry out a user study.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A] We did not carry out a user study.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not carry out a user study.

C Additional Details

C.1 Algorithms

Algorithm 1 describes the process of building the grammar. getVectorTK and getVectorST represent the BOW and one-hot encoding functions, respectively. The best values empirically calculated for the thresholds tsim and tperf are 0.8 and 0.5, respectively.

Algorithm 1 Grammar Builder
Input: Marvin datasets D, query dataset q, threshold t
Initialize S = []  // Similar datasets
for di in D do
    simTK = cosineSimilarity(getVectorTK(di), getVectorTK(q))
    if simTK > tsim then
        simST = cosineSimilarity(getVectorST(di), getVectorST(q))
        if simST > tsim then
            Add di to S
Initialize P = calculateADTM(S)
Initialize R = []  // Production Rules
for pi in P do
    if performance(pi) > tperf then
        ri = convertToPattern(pi)
        Add ri to R
return R

Algorithm 2 describes the process of calculating the primitive importance values in detail. For instance, the primitive importance values calculated for XGBoost and Random Forest are 0.62 and 0.56, whereas for Nearest Centroid and K-Nearest Neighbors the values are 0.46 and 0.44. This shows that the importance values can be used as an indicator to prioritize the usage of primitives.

Algorithm 2 Primitives Importance
Input: Pipelines P, Patterns T
Initialize R = getPrimitives(P)
Initialize G, L = []  // Global and Local correlations
for ri in R do
    pc = PearsonCorrelation(ri, P)
    npc = normalize(pc)
    Add npc to G
for ti in T do
    pi = getPipelines(ti, P)
    R = getPrimitives(ti, pi)
    for ri in R do
        pc = PearsonCorrelation(ri, R)
        npc = normalize(pc)
        Add npc to L
return (G, L)

C.2 Grammars

Different tasks require different grammars.
For instance, the algorithms needed to solve time-series and semi-supervised classification problems have a different structure and use a different set of primitives. Consequently, specialized grammars and production rules are needed for each task. Manually creating these grammars is time-consuming and error-prone, and relying on these grammars can limit the effectiveness of the AutoML systems with respect to problem coverage and quality of the derived pipelines.

Figure 5 shows an excerpt of a grammar automatically generated in AlphaD3M to solve classification problems. The start symbol (S) is the starting point from which all the production rules can be derived. In the grammar, the terminal 'primitive' can be any of the available algorithms in AlphaD3M, and 'E' represents the empty symbol.

S ::= CATEGORICAL_ENCODER TEXT_FEATURIZER DATA_CONVERSION IMPUTATION CLASSIFICATION
S ::= TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER DIMENSIONALITY_REDUCTION CLASSIFICATION
S ::= DATA_STRUCTURE_ALIGNMENT IMPUTATION CLASSIFICATION
S ::= IMPUTATION FEATURE_SCALING CLASSIFICATION
S ::= IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION DIMENSIONALITY_REDUCTION CLASSIFICATION
IMPUTATION ::= 'primitive' | 'E'
CATEGORICAL_ENCODER ::= 'primitive' | 'E'
FEATURE_SCALING ::= 'primitive' | 'E'
FEATURE_SELECTION ::= 'primitive' | 'E'
DIMENSIONALITY_REDUCTION ::= 'primitive' | 'E'
DATA_CONVERSION ::= 'primitive'
TEXT_FEATURIZER ::= 'primitive'
DATA_STRUCTURE_ALIGNMENT ::= 'primitive'
CLASSIFICATION ::= 'primitive'

Figure 5: Excerpt of a grammar automatically generated by AlphaD3M for classification tasks

In Figure 6, you can see the manual grammar used in the experiments. This grammar was proposed by Drori et al. (2019).
To generate this grammar for classification and regression tabular tasks, a developer was asked to manually review the primitives to group them into categories. For instance, the primitives decision_tree.SKlearn and random_forest.SKlearn were grouped into the category 'CLASSIFICATION'. Then, using his knowledge of ML, he created the production rules of the grammar using these categories.

S ::= CLASSIFICATION_TASK | REGRESSION_TASK
CLASSIFICATION_TASK ::= CLASSIFICATION | DATA_CLEANING CLASSIFICATION | DATA_TRANSFORMATION CLASSIFICATION | DATA_CLEANING DATA_TRANSFORMATION CLASSIFICATION
REGRESSION_TASK ::= REGRESSION | DATA_CLEANING REGRESSION | DATA_TRANSFORMATION REGRESSION | DATA_CLEANING DATA_TRANSFORMATION REGRESSION
CLASSIFICATION ::= 'primitive'
REGRESSION ::= 'primitive'
DATA_CLEANING ::= 'primitive' DATA_CLEANING | 'E'
DATA_TRANSFORMATION ::= 'primitive' DATA_TRANSFORMATION | 'E'

Figure 6: Manual Grammar

C.3 Experiments

In Table 4, we can see the scores obtained by all AutoML systems developed in the D3M program, including a majority voting ensemble system, on a collection of 112 datasets.
This collection contains challenging datasets that go beyond simple tabular data and cover a wide variety of tasks and data types.

Table 4: Scores obtained by AlphaD3M and the other AutoML systems developed in the D3M program.

Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori
124_120_mnist_8747 0.98 0.94 0.46 0.18 0.94 0.11 - -
124_138_cifar100_1858 0.67 0.48 0.42 0.12 0.48 0.01 - -
124_16_fashion_mnist 0.90 0.83 0.84 0.12 0.85 0.10 - -
124_174_cifar10_MIN 0.88 0.82 0.84 0.27 0.80 0.10 - -
124_188_usps_MIN 0.96 0.95 0.94 0.26 0.92 0.18 0.11 -
124_214_coil20_MIN 0.99 0.99 0.99 0.85 0.97 - - -
124_95_uc_merced_land_use_MIN 0.90 - 0.72 0.52 - 0.05 0.33 -
1491_one_hundred_plants_margin_MIN 0.80 0.79 0.88 0.92 0.75 0.83 0.81 0.83
1567_poker_hand_MIN 0.90 0.84 0.28 0.48 0.12 0.13 - 0.27
185_baseball_MIN 0.66 0.70 0.65 0.68 0.68 0.67 0.66 0.64
196_autoMpg_MIN 6.57 9.12 5.74 11.95 7.49 6.01 15.36 7.03
22_handgeometry_MIN 0.24 0.23 0.23 0.14 0.80 0.36 0.36 -
26_radon_seed_MIN 0.02 0.02 0.24 0.03 0.02 0.06 1.40 0.02
27_wordLevels_MIN 0.32 0.28 0.28 0.32 0.29 0.27 0.26 0.27
299_libras_move_MIN 0.98 - - 0.48 - - 0.98 0.97
30_personae_MIN 0.62 0.65 0.65 0.62 0.61 0.55 0.61 -
313_spectrometer_MIN 0.43 0.37 0.37 0.30 0.32 0.33 0.23 0.40
31_urbansound_MIN 0.93 0.93 0.91 0.75 0.92 0.77 0.49 -
32_fma_MIN 0.55 0.57 0.34 0.28 - 0.11 0.11 -
32_wikiqa_MIN 0.00 0.02 0.14 0.13 0.50 - 0.13 -
38_sick_MIN 1.00 1.00 - 1.00 - - 0.49 1.00
4550_MiceProtein_MIN 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.00
49_facebook_MIN 0.88 0.87 0.87 0.87 0.87 0.88 0.44 -
534_cps_85_wages_MIN 20.11 20.35 22.07 23.15 24.86 21.44 - 20.70
56_sunspots_MIN 34.55 11.82 8.64 8.45 58.30 9.40 90.60 -
56_sunspots_monthly_MIN 64.61 41.18 46.86 41.04 - 62.20 27.74 -
57_hypothyroid_MIN 0.96 0.98 0.99 0.98 0.74 0.99 0.97 0.98
59_LP_karate_MIN 0.93 0.45 0.83 0.83 0.45 0.45 0.93 -
59_umls_MIN 0.92 0.94 0.94 0.94 0.94 0.70 0.73 -
60_jester_MIN 4.25 - 4.24 4.15 - 4.51 - -
66_chlorineConcentration_MIN 0.82 0.86 0.81 0.52 0.78 0.68 0.23 -
6_70_com_amazon_MIN 0.85 0.85 - 0.85 0.85 - - -
6_86_com_DBLP_MIN 0.72 0.72 - 0.72 0.72 - - -
JIDO_SOHR_Articles_1061 0.98 0.94 0.94 0.81 0.56 0.60 0.64 -
JIDO_SOHR_Tab_Articles_8569 1.00 0.99 1.00 1.00 0.56 1.00 1.00 -
LL0_1100_popularkids_MIN 0.42 0.45 0.38 0.38 0.40 0.44 - 0.47
LL0_186_braziltourism_MIN 0.14 0.35 0.36 0.17 0.24 0.20 0.34 0.16
LL0_207_autoPrice_MIN 4.89·10^6 5.76·10^6 6.04·10^6 3.76·10^7 5.36·10^6 5.43·10^6 1.56·10^8 5.81·10^6
LL0_acled_reduced_MIN 0.83 0.88 0.89 0.84 0.91 0.85 0.74 0.91
LL0_jido_reduced_MIN 0.90 0.89 0.91 0.90 0.90 0.90 - 0.90
LL1_2734_CLIR 0.88 0.50 0.52 0.88 - - 0.50 -
LL1_336_MS_Geolife_transport_MIN 0.60 1.00 0.99 - 0.85 - 0.98 -
LL1_336_MS_Geolife_transport_separate 0.67 1.00 0.99 - 0.86 - 0.99 -
LL1_3476_HMDB_actio_recognition_MIN 0.11 1.00 0.90 0.11 - 0.48 0.08 -
LL1_50words_MIN 0.35 0.55 0.56 0.41 0.51 0.45 0.35 -
LL1_726_TIDY_GPS_carpool 0.54 0.58 0.58 0.46 0.59 - 0.63 -
LL1_736_population_spawn_MIN 1636.12 1806.40 1804.76 1644.26 - 2845.89 - -
LL1_736_population_spawn_simpler_MIN 1346.10 1490.15 3669.54 1347.65 1323.72 1550.40 19887.20 -
LL1_736_stock_market_MIN 7.64 1.49 8.69 1.75 - 30.66 - -
LL1_ACLED_TOR_online_behavior_MIN 0.40 0.05 0.44 0.64 0.43 0.66 0.08 0.40
LL1_Adiac_MIN 0.75 0.70 0.73 0.54 0.67 0.70 0.49 -
LL1_ArrowHead_MIN 0.75 0.82 0.78 0.72 0.65 0.55 0.72 -
LL1_CONFLICT_3457_atrocity 9.53 6.75 11.43 12.84 - 17.21 13.91 -
LL1_Cricket_Y_MIN 0.52 0.54 0.59 0.52 0.62 0.53 0.45 -
LL1_DIC28_net_MIN 0.84 0.80 0.80 0.80 0.80 0.84 - -
LL1_ECG200_MIN 0.90 0.87 0.87 0.86 0.91 0.85 0.86 -
LL1_EDGELIST_net_nomination_MIN 0.99 0.66 0.85 0.94 0.66 0.35 0.84 -
LL1_ElectricDevices_MIN 0.54 0.42 0.46 0.06 0.44 0.27 0.31 -
LL1_FISH_MIN 0.80 0.87 0.89 0.73 0.84 0.86 0.78 -
LL1_FaceFour_MIN 0.84 0.83 0.71 0.55 0.65 0.40 0.66 -
LL1_GS_process_classification_tab_MIN 0.80 0.80 0.80 0.80 0.80 0.73 - 0.81
LL1_GS_process_classification_text_MIN 0.65 0.80 0.65 0.80 0.80 0.76 0.80 -
LL1_GT_actor_group_association_MIN 0.25 0.13 0.17 0.13 - - - -
LL1_HandOutlines_MIN 0.89 0.91 0.90 0.88 0.88 0.88 0.88 -
LL1_Haptics_MIN 0.43 0.42 0.44 0.42 0.41 0.45 0.42 -
LL1_ItalyPowerDemand_MIN 0.93 0.95 0.95 0.95 0.95 0.91 0.90 -
LL1_MIL_MUSK 0.68 0.77 0.83 0.67 0.80 0.80 - 0.72
LL1_MIL_Mutagenesis 0.80 0.73 0.72 0.71 0.70 0.63 - 0.79
LL1_MITLL_synthetic_vora_E_2538 0.29 0.53 0.52 0.50 0.31 0.44 - 0.38
LL1_Meat_MIN 0.95 0.94 0.88 0.92 0.88 0.17 0.95 -
LL1_OSULeaf_MIN 0.53 0.44 0.52 0.77 0.45 0.47 0.32 -
LL1_PHEM_Monthly_Malnutrition_MIN 10.63 9.56 9.39 9.73 - 12.18 - -
LL1_PHEM_weekly_malnutrition_MIN 3.34 4.32 3.45 2.94 - 4.23 4.18 -
LL1_TXT_CLS_3746_newsgroup_MIN 0.60 0.46 0.55 0.48 0.60 0.45 0.23 -
LL1_TXT_CLS_SST_Binary 0.73 0.82 0.82 0.55 - 0.51 0.53 -
LL1_TXT_CLS_airline_opinion_MIN 0.81 0.80 0.81 0.80 0.81 0.72 0.72 -
LL1_TXT_CLS_apple_products_sent_MIN 0.73 0.71 0.72 0.72 0.73 0.66 0.69 -
LL1_VID_UCF11_MIN 0.99 0.99 0.25 0.27 - 0.02 0.08 -
LL1_VTXC_1343_cora_MIN 0.61 0.04 0.22 0.17 0.04 0.13 0.52 -
LL1_VTXC_1369_synthetic_MIN 0.95 0.22 0.33 0.21 0.22 0.19 0.48 -
LL1_ViEWS_CM_S1 0.69 1.20 0.90 0.72 0.75 2.52 - 0.82
LL1_ViEWS_PGM_S1 0.02 0.04 0.02 - 0.02 0.02 0.30 0.02
LL1_bigearth_landuse_detection 0.90 0.96 0.76 0.65 0.21 - - -
LL1_bn_fly_drosophila_medulla_net_MIN 0.24 0.24 - - - 0.19 - -
LL1_h1b_visa_apps_7480 0.44 0.47 0.43 0.44 0.41 0.41 0.47 0.42
LL1_net_nomination_seed_MIN 0.99 0.99 0.96 0.94 0.99 0.34 0.46 -
LL1_penn_fudan_pedestrian_MIN 0.94 0.94 - 0.94 0.94 - - -
LL1_retail_sales_total_MIN 1989.19 1921.54 1941.06 1966.30 1992.17 - 1971.76 2022.41
LL1_terra_canopy_height_s4_100_MIN 113.04 68.44 39.02 52.21 - 79.86 343.27 -
LL1_terra_canopy_height_s4_70_MIN 104.92 547.94 126.06 136.32 - 169.63 136.98 -
LL1_terra_canopy_height_s4_80_MIN 112.95 92.95 32.57 74.59 - 111.49 74.54 -
LL1_terra_canopy_height_s4_90_MIN 117.13 85.73 35.12 60.44 - 104.49 60.45 -
LL1_terra_leaf_angle_mean_s4_MIN 0.04 0.09 0.05 0.04 - - 0.05 -
LL1_tidy_terra_panicle_detection_MIN 0.01 0.03 - - - - - -
SEMI_1040_sylva_prior_MIN 0.93 0.90 0.93 - 0.92 - - -
SEMI_1044_eye_movements_MIN 0.52 0.57 0.61 0.55 0.60 0.53 0.54 -
SEMI_1053_jm1_MIN 0.26 1.00 0.16 - 0.16 0.41 - -
SEMI_1217_click_prediction_small_MIN 0.04 0.03 0.04 - 0.17 - - -
SEMI_1459_artificial_characters_MIN 0.68 0.99 0.83 0.99 0.67 0.61 0.52 -
SEMI_155_pokerhand_MIN 0.58 0.66 0.60 0.05 0.64 0.50 0.51 -
kaggle_music_hackathon_MIN 21.88 17.56 19.64 24.24 21.79 - - 21.85
loan_status_MIN 0.40 0.50 0.51 0.44 0.33 - 0.48 0.46
political_instability_MIN 0.81 0.89 0.89 0.89 0.89 - 0.88 -
uu1_datasmash_MIN 1.00 1.00 1.00 1.00 0.61 1.00 1.00 -
uu2_gp_hyperparameter_estimation_MIN 0.89 0.88 0.57 0.89 - - - 0.89
uu3_world_development_indicators_MIN 2.39·10^10 5.54·10^12 4.12·10^12 - 4.40·10^12 - - -
uu3_world_development_indicators_raw 7.83·10^13 1.04·10^12 5.22·10^11 - - - - -
uu4_SPECT_MIN 0.00 0.92 0.92 0.90 0.89 0.90 0.78 -
uu5_heartstatlog_MIN 0.70 0.69 0.72 0.62 0.61 0.72 0.67 -
uu6_hepatitis_MIN 0.00 0.47 0.89 0.40 0.27 0.31 0.44 -
uu7_pima_diabetes_MIN 0.59 0.57 0.60 0.57 0.60 0.63 0.57 -
uu_101_object_categories_MIN 0.95 0.89 0.84 0.34 - 0.10 - -

The average rank values obtained by different AutoML systems for each task type in the D3M datasets can be seen in Table 5.
These datasets contain a total of 17 unique ML tasks.

Table 5: Average rank values by task obtained by different AutoML systems.

Task AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori
Image Classification 1.11 2.78 2.78 4.56 4.33 6.22 7.44 8.00
Tabular Classification 3.75 3.30 3.35 3.85 4.85 4.65 5.85 3.55
Tabular Regression 2.27 3.18 3.00 5.73 4.27 5.73 7.54 4.36
Image Regression 4.00 2.00 2.00 1.00 7.00 5.00 5.00 8.00
Text Classification 2.56 3.33 2.22 3.00 3.56 5.78 4.33 8.00
Audio Classification 1.50 1.00 3.50 5.00 5.50 5.00 6.00 8.00
Graph Matching 1.00 3.33 3.00 2.33 4.67 3.33 6.33 8.00
Time series Forecasting 3.38 3.62 2.62 2.23 7.31 5.08 5.08 8.00
Link Prediction 3.33 2.33 2.33 1.67 4.67 6.67 5.00 8.00
Collaborative Filtering 3.00 8.00 2.00 1.00 8.00 4.00 8.00 8.00
Time series Classification 3.26 2.26 2.16 4.68 3.79 5.32 4.53 8.00
Community Detection 1.00 1.00 8.00 3.33 3.33 6.33 8.00 8.00
Video Classification 2.50 1.00 3.00 3.50 8.00 4.50 5.50 8.00
Vertex Classification 1.00 4.00 3.25 4.25 4.00 6.50 3.50 8.00
Object Detection 1.50 1.00 8.00 4.50 4.50 8.00 8.00 8.00
Semisupervised Classification 3.50 2.33 2.33 6.00 2.83 6.00 6.83 8.00
LUPI 5.25 3.00 1.25 4.50 5.00 2.50 4.75 8.00

Table 6 and Table 7 show the raw and normalized scores (normalized by the best score) obtained by each system on the 39 datasets of the OpenML AutoML Benchmark (Gijsbers et al., 2019). This benchmark represents real-world data science problems and covers binary and multiclass classification tasks.
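The per-dataset normalization behind Table 7 can be sketched as follows: each system's raw score is divided by the best raw score on that dataset, so the best system gets 1.00. This is an illustrative sketch with toy numbers (it assumes a positive best score and leaves failed runs, marked None, as missing), not the benchmark's actual code.

```python
def normalize_by_best(raw_scores):
    """Normalize one dataset's scores by the best score among the systems."""
    best = max(s for s in raw_scores.values() if s is not None)
    return {
        system: (None if score is None else round(score / best, 2))
        for system, score in raw_scores.items()
    }

# Toy row: a failed run stays missing, the best system normalizes to 1.0.
row = {"AutoGluon": 0.76, "AutoWEKA": None, "AlphaD3M": 0.79}
print(normalize_by_best(row))  # AlphaD3M -> 1.0, AutoGluon -> round(0.76/0.79, 2) = 0.96
```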
Additionally, Table 6 shows the gain of AlphaD3M relative to the other systems.

Table 6: Raw scores obtained by AlphaD3M and the other AutoML systems.
Dataset AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3M Gain
task_10101 0.76 0.76 0.76 0.76 0.76 0.79 0.03
task_12 0.98 0.98 0.98 0.98 - 0.96 -0.01
task_146195 0.88 0.71 0.86 0.88 0.85 0.81 -0.03
task_146212 1.00 1.00 1.00 1.00 1.00 1.00 0.00
task_146606 0.74 0.60 0.73 0.72 - 0.73 0.03
task_146818 0.91 0.86 0.84 0.90 0.87 0.87 -0.01
task_146821 0.99 1.00 1.00 1.00 1.00 0.97 -0.03
task_146822 0.97 0.97 0.97 0.97 0.98 0.97 0.00
task_146825 0.91 - 0.91 0.90 - 0.86 -0.05
task_14965 0.91 0.88 0.91 0.91 0.91 0.91 0.00
task_167119 0.92 0.80 0.94 0.96 0.90 0.83 -0.08
task_167120 0.51 0.51 0.51 0.51 - 0.51 -0.00
task_168329 0.40 0.27 0.38 0.35 0.35 0.37 0.02
task_168330 0.73 0.65 0.73 0.73 0.70 0.72 0.01
task_168331 0.73 0.62 0.73 0.69 0.66 0.66 -0.02
task_168332 0.56 - 0.54 0.51 0.44 0.41 -0.10
task_168335 0.94 - 0.94 - 0.93 0.94 -0.00
task_168337 0.84 - 0.86 0.83 0.77 0.61 -0.21
task_168338 1.00 - 1.00 1.00 0.99 0.97 -0.03
task_168868 0.99 0.99 0.99 1.00 0.99 0.99 0.00
task_168908 0.74 0.73 0.76 0.72 - 0.77 0.03
task_168909 0.99 0.96 0.99 0.98 - 0.99 0.01
task_168910 0.72 0.60 0.72 0.72 0.71 0.65 -0.04
task_168911 0.81 0.82 0.82 0.82 0.81 0.81 -0.01
task_168912 0.93 0.92 0.95 0.95 0.95 0.94 -0.00
task_189354 0.67 - 0.67 0.61 0.67 0.65 -0.01
task_189355 0.94 - 0.00 - - 0.88 0.41
task_189356 0.71 - 0.69 - - - -
task_3 0.99 0.93 0.99 1.00 0.99 0.99 0.01
task_31 0.77 0.66 0.82 - 0.82 0.77 0.00
task_34539 0.95 - 0.95 0.95 0.95 0.95 -0.01
task_3917 0.87 - 0.86 - 0.88 0.86 -0.01
task_3945 0.98 - 0.98 0.98 0.98 0.98 0.00
task_53 0.86 0.67 0.85 0.88 - 0.82 0.01
task_7592 0.87 0.87 0.87 0.86 0.87 0.87 0.00
task_7593 0.97 0.66 0.96 0.80 - 0.95 0.10
task_9952 0.88 0.91 0.90 0.90 0.91 0.91 0.01
task_9977 0.98 0.95 0.97 0.98 0.97 0.96 -0.00
task_9981 0.94 0.86 0.96 0.94 0.96 0.94 0.01

Table 7: Normalized scores obtained by AlphaD3M and the other AutoML systems.
Dataset AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3M
task_10101 0.97 0.97 0.97 0.97 0.97 1.00
task_12 0.99 1.00 0.99 0.99 - 0.98
task_146195 1.00 0.81 0.98 1.00 0.97 0.92
task_146212 1.00 1.00 1.00 1.00 1.00 1.00
task_146606 1.00 0.82 1.00 0.98 - 0.99
task_146818 1.00 0.94 0.92 0.98 0.95 0.95
task_146821 0.99 1.00 1.00 1.00 1.00 0.97
task_146822 1.00 0.99 1.00 1.00 1.00 1.00
task_146825 1.00 - 0.99 0.99 - 0.94
task_14965 1.00 0.96 1.00 1.00 1.00 1.00
task_167119 0.96 0.83 0.98 1.00 0.94 0.86
task_167120 1.00 1.00 1.00 0.99 - 0.99
task_168329 1.00 0.69 0.96 0.88 0.89 0.94
task_168330 1.00 0.89 1.00 1.00 0.97 0.98
task_168331 1.00 0.84 1.00 0.95 0.90 0.91
task_168332 1.00 - 0.98 0.93 0.80 0.75
task_168335 1.00 - 1.00 - 0.99 0.99
task_168337 0.98 - 1.00 0.97 0.89 0.71
task_168338 1.00 - 1.00 1.00 0.99 0.97
task_168868 1.00 0.99 1.00 1.00 1.00 1.00
task_168908 0.97 0.96 0.99 0.94 - 1.00
task_168909 1.00 0.97 1.00 0.99 - 1.00
task_168910 1.00 0.83 1.00 1.00 0.98 0.90
task_168911 0.99 1.00 1.00 1.00 0.99 0.98
task_168912 0.98 0.97 0.99 1.00 1.00 0.98
task_189354 1.00 - 1.00 0.91 1.00 0.96
task_189355 1.00 - 0.00 - - 0.94
task_189356 1.00 - 0.97 - - -
task_3 1.00 0.94 1.00 1.00 1.00 1.00
task_31 0.94 0.80 1.00 - 1.00 0.94
task_34539 1.00 - 1.00 1.00 0.99 0.99
task_3917 0.99 - 0.98 - 1.00 0.98
task_3945 1.00 - 1.00 0.99 1.00 1.00
task_53 0.97 0.76 0.96 1.00 - 0.93
task_7592 1.00 0.99 1.00 0.99 1.00 1.00
task_7593 1.00 0.68 0.99 0.82 - 0.97
task_9952 0.96 0.99 0.98 0.98 1.00 0.99
task_9977 1.00 0.97 1.00 1.00 1.00 0.99
task_9981 0.98 0.89 1.00 0.98 1.00 0.98
YDIfeR9xKLA
71eJdMzCCIi
automl.cc/AutoML/2023/ABCD_Track
2023
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks
["Roque Lopez", "Raoni Lourenco", "Remi Rampin", "Sonia Castelo", "A\u00e9cio S. R. Santos", "Jorge Henrique Piazentin Ono", "Claudio Silva", "Juliana Freire"]
We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to effectively construct pipelines over a large collection of primitives. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the loop tasks, including the selection of suitable pipelines and the development of solutions for complex systems. We present use cases that demonstrate some of these features. We report the results of detailed experimental evaluations which show that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance that is comparable or superior to state-of-the-art AutoML systems.
["AutoML", "Python Library", "Multiple ML Tasks"]
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks

Roque Lopez (1), Raoni Lourenço (2), Remi Rampin (1), Sonia Castelo (1), Aécio Santos (1), Jorge Ono (1), Claudio Silva (1), Juliana Freire (1)
(1) New York University, (2) University of Luxembourg

Abstract. We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to effectively construct pipelines over a large collection of primitives. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that supports user-in-the-loop tasks, including selecting suitable pipelines and developing custom solutions for complex problems. We present use cases that demonstrate some of these features. We report the results of a detailed experimental evaluation showing that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems, with performance comparable or superior to state-of-the-art AutoML systems.

1 Introduction
Automated Machine Learning (AutoML) has emerged as an alternative to automatically synthesize machine learning (ML) pipelines, thereby democratizing ML techniques for non-experts as well as increasing the productivity of data scientists. Different approaches have been proposed for AutoML systems. Some focus on specific components of an ML pipeline, such as hyperparameter optimization or model selection, while others, given a dataset and a prediction task, generate end-to-end pipelines that encompass data pre-processing, feature, and model selection (Hutter et al., 2019). Most end-to-end systems are designed to work with tabular data and only support classification and regression problems (Feurer et al., 2015; LeDell and Poirier, 2020; Olson and Moore, 2016; Kotthoff et al., 2017).
Cloud AutoML (Google Cloud AutoML, 2020) and AutoGluon (Erickson et al., 2020) also create pipelines to classify text and images and perform object detection tasks. However, these systems do not support more complex data types such as graphs, time series, audio, and video, limiting the types of problems they can address. Table 1 shows the set of task types supported by different AutoML systems.

In the context of DARPA's Data-Driven Discovery of Models (D3M) program (Elliott, 2020), several AutoML systems have been developed to support a wide range of data types and ML tasks using an extensive set of computational primitives as building blocks; we refer to these as multi-task AutoML systems (MT-AutoML). MT-AutoML systems face an essential challenge: effectively searching an ample space of primitives required to synthesize pipelines for a broad range of tasks and data types. To prune the search space, many D3M MT-AutoML systems use manually-crafted templates and grammars (D3M, 2022) that prescribe combinations of primitives that make sense for different problems.
This, in turn, leads to other challenges: creating these templates or grammars is not only time-consuming, but failing to include the necessary rules that cover the relevant primitives (and their combinations) for multiple task types can negatively impact the ability of an MT-AutoML system to derive performant pipelines.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Table 1: Tasks supported by different AutoML Systems. The 17 task columns are: Tabular Classification, Text classification, Image classification, Audio classification, Video classification, Tabular Regression, Clustering, Time series forecasting, Time series classification, Object detection, LUPI, Community detection, Link prediction, Graph matching, Vertex classification, Collaborative filtering, and Semisupervised classification. AutoGluon supports 6 of these task types, AutoWEKA 2, Auto-Sklearn 2, Cloud AutoML 6, H2O 3, TPOT 2, and AlphaD3M all 17.

We present AlphaD3M, an open-source AutoML library1 that supports a wide range of data and problem types (see Table 1). AlphaD3M introduces new techniques to effectively navigate the large search spaces MT-AutoML systems face. They include an algorithm that applies meta-learning to automatically derive task-based context-free grammars (CFGs) which cover a multitude of problems, and a novel search strategy that, based on previously generated pipelines and their performance, prioritizes primitives that are correlated with good pipeline performance.

AlphaD3M includes components that aim to support usability and integration with other tasks in the data science lifecycle, from data exploration and model summarization to model deployment. It is possible to extend AlphaD3M and combine it with other tools through its flexible API. For example, its integration with the PipelineProfiler (Ono et al., 2021) allows users to explore and compare the set of derived pipelines visually.
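The task-based grammars mentioned above expand into concrete pipeline skeletons. Below is a minimal sketch of that expansion using a small, hand-written grammar with hypothetical primitive names; in AlphaD3M itself the production rules are derived automatically via meta-learning, not written by hand.

```python
# Expand a toy context-free grammar into all concrete pipeline skeletons.
# The grammar, rule names, and primitive names here are illustrative only.
from itertools import product

GRAMMAR = {
    "PIPELINE": [["IMPUTATION", "ENCODING", "CLASSIFICATION"],
                 ["IMPUTATION", "CLASSIFICATION"]],
    "IMPUTATION": [["imputer.SKlearn"]],
    "ENCODING": [["one_hot_encoder.SKlearn"], ["ordinal_encoder.SKlearn"]],
    "CLASSIFICATION": [["random_forest.SKlearn"], ["svm.SKlearn"]],
}

def expand(symbol):
    """Recursively expand a non-terminal into all concrete primitive sequences."""
    if symbol not in GRAMMAR:          # terminal: an actual primitive name
        return [[symbol]]
    pipelines = []
    for rule in GRAMMAR[symbol]:
        # Cartesian product of the expansions of each right-hand-side symbol
        for parts in product(*(expand(s) for s in rule)):
            pipelines.append([p for part in parts for p in part])
    return pipelines

candidates = expand("PIPELINE")
# 1 imputer x 2 encoders x 2 classifiers + 1 imputer x 2 classifiers = 6 skeletons
print(len(candidates))  # 6
```

Each skeleton is then a candidate for primitive selection and hyperparameter tuning; restricting the grammar to rules observed for similar tasks is what keeps this expansion tractable.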
Besides describing the API and these components, we also present case studies demonstrating how users can improve ML solutions via interaction in AlphaD3M.

We conducted a detailed experimental evaluation to assess the ability of AlphaD3M to handle a rich set of tasks and data types, as well as to compare its performance against state-of-the-art AutoML and MT-AutoML systems. We used two benchmarks: (a) a collection of 112 datasets that covers seventeen different ML tasks, and (b) the OpenML AutoML Benchmark for tabular classification problems. Our results show that the search strategies used by AlphaD3M are effective: the system generates pipelines whose performance is superior or on par with those derived by other systems, including systems that focus on a small set of problems and have to navigate a much smaller search space.

2 Related Work
Task Coverage. Many AutoML systems have been proposed to work with tabular data, for example Auto-sklearn (Feurer et al., 2015), TPOT (Olson and Moore, 2016), and H2O (LeDell and Poirier, 2020). The deep reinforcement learning algorithm proposed by Drori et al. (2019) aimed to support multiple learning tasks and data types; however, its implementation was limited to classification and regression tasks over tabular and text data. AutoML systems developed in industry, such as Cloud AutoML by Google and AutoGluon by Amazon, handle text and image data, but still support a limited number of learning tasks. In contrast, AlphaD3M supports a wide range of data types (tabular, text, images, audio, video, and graph) and a rich set of ML tasks, as shown in Table 1.

Data and Model Exploration. Interactive data analytics systems such as Visus (Santos et al., 2019), TwoRavens (Gil et al., 2019), and Snowcat (Cashman et al., 2018) have been developed to guide users throughout the model-building process, from exploring the input data to comparing the ML pipelines produced by AutoML systems.
They target primarily domain experts who have little or no expertise in ML, and thus lack support for the customization of pipelines for complex problems. These systems trade off flexibility for ease of use. As such, they are limited to the operations implemented in their visual interfaces; extensive and time-consuming changes in their workflows are required to support new data types and tasks (e.g., graph data). Other approaches mimic the interface of traditional ML libraries, through which developers often build a single solution for a given task (Grafberger et al., 2021). AlphaD3M allows ML experts to explore the derived pipelines and customize them through a user-friendly interface within a Jupyter Notebook environment. In addition, instead of retrieving only the best pipeline, AlphaD3M returns all valid pipelines, then ranks and presents them to the user for comparison, refinement, and selection.

1 https://gitlab.com/ViDA-NYU/d3m/alphad3m

3 The AlphaD3M Library
Figure 1: Overview of AlphaD3M.
AlphaD3M is a multi-task AutoML system. It is implemented in Python and can be used via pip installation or Docker. Figure 1 shows an overview of this library and its components. To build ML pipelines, AlphaD3M uses a rich set of primitives and a meta-learning database from the D3M ecosystem (D3M, 2022). The pipeline search is conducted by four modules which: (a) automatically construct task-specific grammars; (b) prioritize primitives that are more likely to be effective; (c) synthesize pipelines using Monte Carlo Tree Search and neural networks (Drori et al., 2019); and (d) tune hyperparameters. The library implements a Python API through which users can define the problem to be solved, explore the input data, obtain model summaries, analyze and compare the produced pipelines, as well as improve and deploy them.

3.1 The D3M Ecosystem
Primitives. AlphaD3M uses a comprehensive collection of primitives developed by performers in the D3M program as well as from open-source libraries (e.g., scikit-learn).
In total, there are 312 primitives available for different steps in ML pipelines, including data pre-processing, feature extraction, feature selection, prediction, and clustering (D3M Primitives, 2022); they implement state-of-the-art methods such as ResNet50 (He et al., 2016) and ARIMA (Wilson, 2016), among others.

The Marvin Meta-Learning Database. Marvin is an open corpus of curated ML pipelines, datasets, and problems (Marvin, 2020). All pipelines in Marvin share the same set of primitives and are specified using the D3M format. Marvin stores approximately 2.5 million pipelines executed over 600 datasets. Since these pipelines have been produced both by data scientists and by AutoML systems that use different search strategies, the database covers a wide variety of pipeline patterns. As discussed below, we leverage the data in Marvin to assist in and improve the AlphaD3M search process. To the best of our knowledge, ours is the first work that explores this corpus.

3.2 Pipeline Search
The automatic synthesis of pipelines is a combinatorial problem in which we must find the best combinations of primitives and their hyperparameters. With 312 primitives and over 1,500 hyperparameters in the D3M ecosystem, the search space becomes prohibitively large. For instance, considering just the classification task over tabular data, there are 22 data cleaning, 87 data transformation, and 44 classifier primitives, leading to 84,216 possible pipelines to test. AlphaD3M uses the multi-pronged approach described below to manage this search space.

A. Pipeline Synthesis Using Monte Carlo Tree Search and Neural Networks. To synthesize the ML pipelines, AlphaD3M uses the strategy introduced by Drori et al. (2019), which is based on a single-player game technique inspired by AlphaZero (Silver et al., 2017). It applies model-based reinforcement learning with a neural network sequence model and a Monte Carlo Tree Search (MCTS).
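The search-space estimate quoted above follows directly from choosing one primitive per step of a three-step tabular classification pipeline:

```python
# Reproducing the search-space figure quoted in the text: one primitive
# chosen per step of a cleaning -> transformation -> classifier pipeline.
cleaning, transformation, classifiers = 22, 87, 44
total = cleaning * transformation * classifiers
print(total)  # 84216
```

Allowing optional or repeated steps, as a grammar does, grows this number further, which is why pruning the space matters.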
The metadata encoding the pipeline, the dataset, and the task are analogous to an entire game board configuration in AlphaZero. The possible game states consist of all valid pipelines generated from a set of primitives and modified by actions guided by a manually-designed CFG. The model outputs a sequence of primitives; pipelines are constructed by an LSTM. Given a state s composed of a vector encoding the whole board configuration (dataset, task, pipeline), the neural network predicts the probabilities P(s,a) over actions a from state s. This process produces a set of action sequences S that describe a pipeline, which in turn solves task T on dataset D. The network also outputs an estimate of pipeline performance v. The reinforcement learning algorithm takes the predictions (P(s,a), v(s)) produced by the neural network and uses them in the MCTS by running multiple simulations to search for the pipeline sequence R with the best evaluation. An important benefit of this strategy is that it learns to synthesize pipelines.

B. Automatic Generation of Task-Based CFGs via Meta-Learning. Manually designed CFGs have many limitations: notably, they may not cover all applicable rules and pipeline structures, and consequently prevent the search process from exploring desirable pipelines that do not fit the grammar. Furthermore, to create the production rules or patterns in the grammar, a user needs to have knowledge of all the available primitives for a specific task and how they work. For large primitive collections, this is a difficult task, which is compounded for MT-AutoML systems that support multiple problem types. Instead of relying on manually created CFGs, we propose a new strategy that uses meta-learning to derive grammars automatically and on the fly. It does so in two steps: 1) it selects task-specific pipelines and datasets from a meta-learning database (MLDB), and 2) uses these to derive a portfolio of pipeline patterns.

Selecting Task-Oriented Datasets.
Since AlphaD3M supports different tasks, we need to retrieve from the Marvin MLDB pipelines produced for tasks and datasets similar to the ones provided as inputs to the AutoML system. For instance, if we want to solve a clustering problem over a dataset D, we retrieve the pipelines used for this problem over datasets similar to D. To select relevant pipelines for a given problem P over dataset D, we use the "task keywords" tag list provided in the problem definition as features that describe the task to be solved, and search Marvin for pipelines that contain a similar set of keywords. The list is encoded as a bag-of-words (BOW). Since the set is small and most of the tags are non-standard words (e.g., collaborativeFiltering, timeSeries), it is possible to obtain accurate matches with this simple approach.

Given the set of relevant pipelines RP, we select a subset RPD containing pipelines that were applied on datasets similar to D. To determine whether two datasets are similar, we use dataset features including semantic types (e.g., categorical, date-time) and missing values, and encode them using one-hot encoding. Datasets are compared using cosine similarity. The current implementation uses 16 unique semantic types detected by the datamart_profiler (Datamart Profiler Library, 2021). In contrast to approaches like TabSim (Habibi et al., 2020) or StruBERT (Trabelsi et al., 2022), AlphaD3M uses semantic types because the grammar defines components to handle the dataset's features, such as categorical or date-time encoders, and these components are strongly related to semantic types. Moreover, those approaches focus on tabular datasets, whereas AlphaD3M also handles other dataset types, such as images and text. Finally, running them is very time-consuming.

Creating a Portfolio of Patterns. After identifying similar datasets, the next step is to select the best pipelines to create a portfolio of pipeline patterns.
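The dataset-similarity check described above can be sketched in a few lines. The semantic-type vocabulary and the feature sets below are illustrative stand-ins, not the profiler's actual 16-type output:

```python
# Sketch: encode each dataset as a one-hot vector over detected semantic
# types (plus a missing-values flag) and compare datasets with cosine
# similarity. The vocabulary here is hypothetical.
import math

SEMANTIC_TYPES = ["categorical", "date_time", "integer", "real", "text", "has_missing"]

def encode(dataset_types):
    """One-hot encoding of the set of semantic types detected in a dataset."""
    return [1.0 if t in dataset_types else 0.0 for t in SEMANTIC_TYPES]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

query = encode({"categorical", "real", "has_missing"})
candidate = encode({"categorical", "real"})
print(round(cosine(query, candidate), 3))  # 0.816
```

Pipelines run on the most similar datasets are then the ones mined for patterns in the next step.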
To select these, AlphaD3M takes into consideration pipeline performance across different datasets. Some datasets are more challenging than others: the performance of a pipeline can vary widely from dataset to dataset. To properly compare pipeline performance, AlphaD3M uses a strategy based on the average distance to minimum (ADTM) (Wistuba et al., 2015), which transforms the performance to the distance to the best-observed performance, scaled between 0 and 1. In contrast to ADTM, which uses the misclassification rate, AlphaD3M uses the actual performance (the score) of the pipelines, and thus applies the average distance to maximum instead to select the best pipelines. It then transforms the primitives within the pipelines to their classes. For instance, the primitive imputer.SKlearn belongs to the class IMPUTATION. If a pipeline has the structure [imputer.SKlearn svm.SKlearn], it is converted to the pattern [IMPUTATION CLASSIFICATION]. Unlike Feurer et al. (2021), which creates a single portfolio of pipelines in an offline phase, AlphaD3M creates the portfolio online, based on the query task and dataset. Also, the output is a portfolio of patterns, not of static pipelines, which allows more flexibility to construct pipelines. These patterns are used as production rules of the grammar. Algorithm 1 in the Appendix describes the process of building the grammar.

C. Prioritization of Primitives. When a data scientist builds an ML pipeline, they start this process using primitives that are known to perform well. For example, XGBoost or Random Forests are good initial candidates for classification tasks. AlphaD3M follows this intuition to identify good candidate primitives for a specific task, using the data from Marvin. This prior knowledge about promising primitives can be helpful to find better pipelines faster.

Similar to Ono et al. (2021), AlphaD3M uses Pearson Correlation (PC) to estimate how much a primitive contributes to the score of the pipeline.
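A toy sketch of the two ideas just introduced: rescale raw scores by distance to the best observed score so they are comparable across datasets, then correlate a primitive's presence with the rescaled scores. The point-biserial coefficient coincides with Pearson's r when one variable is 0/1, so plain Pearson correlation is used below. The pipeline data are made up:

```python
# ADTM-style rescaling of pipeline scores, followed by correlating a
# primitive's presence (0/1) with those scaled scores. Toy data only.
import statistics

def adtm_scale(scores):
    """Distance-to-best rescaling of raw scores within one dataset (0..1)."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Four pipelines on one toy dataset: does the pipeline contain "xgboost"?
presence = [1, 1, 0, 0]
scaled = adtm_scale([0.90, 0.85, 0.70, 0.60])
print(round(pearson(presence, scaled), 2))  # 0.94
```

A high (normalized) correlation for a primitive raises its priority during the search, both globally and within each pattern.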
However, instead of using the raw scores, it uses the ADTM values because they are scaled across different datasets. AlphaD3M estimates the primitive importance using PC between the primitive indicator vector p (p_i = 1 if pipeline i contains the primitive in question and p_i = 0 otherwise) and the pipeline score vector s, where s_i is the score for pipeline i. Since p and s are dichotomous and quantitative variables, respectively, the Point-Biserial Correlation coefficient (PBC) (Sheskin, 2003) is an appropriate correlation measure: it is mathematically equivalent to PC but can be calculated with fewer operations. The correlation values are normalized between 0 and 1 (using min-max normalization).

AlphaD3M calculates these correlations for the primitives at two levels: (a) global, considering all the pipelines, and (b) local, considering only the pipelines for each pattern. The main goal is to estimate how important a primitive is for all the pipelines and for each pattern. Primitives with higher importance values should have priority during the pipeline search. Algorithm 2 describes the process of calculating the primitive importance values in detail (see the Appendix). To prioritize promising primitives, AlphaD3M includes these importance values in the MCTS selection formula:

    U(s,a) = Q(s,a) + c (α P(s,a) + (1 − α) R(a)) · √N(s) / (1 + N(s,a))    (1)

where Q(s,a) is the expected reward for action a (selection of primitive a) from state s, N(s,a) is the number of times action a was taken from state s, and N(s) is the number of times state s was visited. P(s,a) are the probabilities predicted by the neural network over actions a from state s, c is a constant that determines the amount of exploration, R(a) = G(a) · L(a), where G(a) and L(a) are the global and local importance of action a, and α is a coefficient that controls the trade-off between R(a) and P(s,a).

D. Decoupled Hyperparameter Tuning.
Hyperparameter tuning is an essential part of fitting machine learning models (Bergstra et al., 2011; Snoek et al., 2015; Dolatnia et al., 2016). This is also the case for end-to-end ML pipelines that target different tasks, where all primitives contain hyperparameters, not just the estimators. AlphaD3M performs hyperparameter tuning as an independent task, after the pipelines are constructed. It uses Bayesian optimization, which is the state of the art for hyperparameter tuning (Bergstra and Bengio, 2012; Snoek et al., 2015; Dolatnia et al., 2016) and was shown to outperform manual setting of parameters, grid search, and random search (Bergstra and Bengio, 2012; Turner et al., 2021).

Figure 2: (a) A code snippet to solve a semi-supervised classification task. (b) AlphaD3M allows users to inspect the contents of the input dataset, including column statistics and data types. (c) Analyzing ML pipelines through the integration with PipelineProfiler.

Tuning Top-k Pipelines. AlphaD3M synthesizes and evaluates the pipelines using primitives with default values for hyperparameters. The pipelines are then ranked by performance, and the top-k pipelines are selected for tuning. AlphaD3M uses Sequential Model-Based Algorithm Configuration (SMAC) (Lindauer et al., 2022), a Python library for Bayesian optimization, which fits a probability model of the performance outcome given a parameter configuration and updates it from a history of executions. AlphaD3M selects the Gaussian Process models from SMAC and uses the Expected Improvement criterion to choose the parameter values at each iteration, until a stopping condition (a number of iterations) is met. The acquisition function normalizes the performance metric used to synthesize the pipelines between zero and one; as the number of pipeline evaluations increases, the acquisition function approaches zero. SMAC requires a set of unique parameters to assign values during its tuning procedure.
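The decoupled tuning step can be illustrated compactly. AlphaD3M relies on SMAC; the sketch below substitutes a tiny hand-rolled Gaussian-process surrogate queried through Expected Improvement, tuning a single hypothetical hyperparameter of a toy objective, so everything (kernel, grid, objective) is an assumption for illustration:

```python
# Toy Bayesian optimization: GP surrogate + Expected Improvement over one
# hyperparameter in [0, 1]. Stands in for SMAC; not AlphaD3M's implementation.
import math
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-12)
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    return (mu - best) * cdf + sigma * pdf

def tune(objective, n_iters=15, noise=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    X = list(rng.uniform(0, 1, size=3))      # initial random configurations
    y = [objective(x) for x in X]
    grid = np.linspace(0, 1, 201)
    for _ in range(n_iters):
        Xa, ya = np.array(X), np.array(y)
        K_inv = np.linalg.inv(rbf(Xa, Xa) + noise * np.eye(len(Xa)))
        Ks = rbf(grid, Xa)
        mu = Ks @ K_inv @ ya                  # GP posterior mean on the grid
        var = np.clip(1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks), 0, None)
        ei = expected_improvement(mu, np.sqrt(var), max(y))
        x_next = float(grid[int(np.argmax(ei))])
        X.append(x_next)
        y.append(objective(x_next))
    return X[int(np.argmax(y))], max(y)

# Hypothetical "validation score" peaking at hyperparameter value 0.65
best_x, best_y = tune(lambda x: 1.0 - (x - 0.65) ** 2)
print(round(best_x, 2))
```

In AlphaD3M the same loop runs per pipeline over the joint (hierarchical) configuration space of all its primitives, via SMAC and ConfigSpace.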
Since AlphaD3M considers multiple primitives with identical names, it constructs an internal hierarchical nomenclature of parameters and encodes their dependencies using ConfigSpace.

3.3 The API
We have developed a Python-based API that supports the process of building and exploring ML pipelines within a Jupyter Notebook environment. The API is integrated with the D3M AutoML systems and supports various dataset formats such as raw CSV, D3M, and OpenML. Model synthesis can be done with a few lines of code, as shown in Figure 2(a). The API allows users to (a) define a problem, (b) explore summaries of their input dataset, (c) summarize the produced pipelines, and (d) analyze and compare pipelines with respect to their performance scores and prediction outputs. We describe the main components of the API below.

Problem Definition. To build a predictive model, AlphaD3M needs a problem specification that describes a prediction problem, specifically: (a) the training dataset; (b) a target variable, i.e., what should be predicted by the model; (c) the maximum running time, which controls how long the search can take (to limit the use of computational resources); (d) the desired performance metric; and (e) a list of task keywords that specify the kind of prediction task and, therefore, the techniques that should be used to solve it. Figure 2(a) shows an example of how to define a problem in AlphaD3M.

Table 2: Comparison of MT-AutoML systems with respect to the number of supported task types, winner pipelines, and average rank by each system.
  AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al. (2019)
Unique ML tasks supported: 17 16 15 17 15 16 14 2
Winner pipelines: 49 39 30 21 20 11 10 7
Average rank: 2.85 2.89 2.90 3.99 4.68 5.32 5.73 6.85

Data Exploration. To build good predictive models, it is important to identify data attributes that lead to accurate predictions. The API provides multiple tools for data exploration.
For example, it shows different visualizations (compact, detail, and column views) that summarize the content of tabular datasets (see Figure 2(b)).

Pipeline Summary. After the pipeline search is complete, users can display a leaderboard, train individual pipelines with the complete data, perform predictions, and evaluate them against a held-out dataset.

Pipeline Exploration. Users can analyze the produced pipelines using PipelineProfiler (Ono et al., 2021), which is fully integrated into AlphaD3M, as shown in Figure 2(c). PipelineProfiler is a visual analytics tool that enables users to compare and explore the pipelines generated by AutoML systems.

Pipeline Refinement and Deployment. AlphaD3M allows users to save and load pipelines, so they can reload them later and perform analyses without having to re-run the AutoML search. They can load the saved pipelines at any time for training or testing purposes. In addition, users can export pipelines to Python code. This gives them more control and the ability to modify (and customize) the automatically generated pipelines (e.g., change hyperparameters or replace a classifier primitive). More information about the API can be found in the documentation: https://alphad3m.readthedocs.io/en/latest/api.html

4 Evaluation
To demonstrate the effectiveness of AlphaD3M and its ability to handle a rich set of ML tasks, we compared AlphaD3M with state-of-the-art AutoML systems using two dataset collections. We also present use cases to show how useful, flexible, and easy to use AlphaD3M is.

4.1 Comparing AutoML Systems
D3M Datasets. This collection contains challenging datasets and covers a wide variety of tasks (a total of 17 task types) and data types (see Table 3). We evaluated all the systems using train and test splits. In most cases, the sizes are 0.8 and 0.2 for the train and test splits, respectively (see the dataset repository [2] for details).
For each dataset, we ran the systems over the train split for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021). After that, we evaluated the best pipeline produced by each system on the test split. For this experiment, we used 1 GPU (GeForce GTX 1080 Ti), 14 CPU cores (Intel Xeon E5-2695 v4, 2.10 GHz), and 56 GB memory.

Table 2 shows the number of supported task types (ML tasks), winner pipelines (i.e., pipelines with the best performance for a given dataset), and the average rank of each AutoML system (rank of each system among the 8 AutoML systems applied to each dataset). If two or more systems produce pipelines that tie on the best score, all of them are counted as winner pipelines. As we can see, AlphaD3M and Aika were able to solve 17 out of 17 unique tasks, obtaining the best coverage. We also evaluated the effectiveness of AlphaD3M. It had the best overall performance, producing the best pipeline for 49 datasets with the best average rank (2.85).

2 https://datasets.datadrivendiscovery.org/d3m/datasets

Table 3: Number of datasets by task type and number of solved datasets by each AutoML system for all task types covered by the D3M datasets.
ML Task: AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al. (2019)
Tabular Classification (20): 20 19 18 20 18 17 13 20
Tabular Regression (11): 11 11 11 8 9 6 5 9
Image Classification (9): 9 8 9 9 7 7 2 0
Image Regression (1): 1 1 1 1 1 1 1 0
Text Classification (9): 9 9 9 9 8 8 9 0
Audio Classification (2): 2 2 2 2 1 2 2 0
Graph Matching (3): 3 3 3 3 2 2 2 0
Time series Forecasting (13): 13 13 13 13 2 12 10 0
Link Prediction (3): 3 3 3 3 2 2 2 0
Collaborative Filtering (1): 1 0 1 1 0 1 0 0
Time series Classification (19): 19 19 19 17 19 15 19 0
Community Detection (3): 3 3 0 2 2 1 0 0
Video Classification (2): 2 2 2 2 0 2 2 0
Vertex Classification (4): 4 4 4 4 4 4 4 0
Object Detection (2): 2 2 0 1 1 0 0 0
Semisupervised Classification (6): 6 6 6 3 6 4 3 0
LUPI (4): 4 4 4 4 4 4 4 0

Analyzing the support for each task type individually in Table 3, we can see that AlphaD3M was able to produce valid pipelines for all the datasets, and it solved more datasets than the other systems. Even though AlphaD3M is inspired by Drori et al. (2019), Table 2 and Table 3 clearly show the difference between them: AlphaD3M handles a larger number of tasks and produces many more winner pipelines. This shows that the different components of AlphaD3M are effective at handling the larger search spaces required by MT-AutoML systems. The detailed scores obtained by each system on all the D3M datasets and the average rank by task can be found in Table 4 and Table 5 (Appendix).

Additionally, we calculated the number of winner pipelines for the top-3 systems only on the datasets where all of them produced pipelines. AlphaD3M, Ensemble, and AutonML got 48, 42, and 38, respectively.
These results confirm that the superior performance of AlphaD3M is not solely due to its support for a broader range of ML tasks.
Figure 3: Ablation study for the different components of AlphaD3M.
We performed an ablation study to analyze the contribution of each component of AlphaD3M on a random sample of five D3M datasets for classification tasks (datasets for which AlphaD3M obtained the best, average, and worst performances). Figure 3 shows the best scores for each dataset reached by the full AlphaD3M and by versions with some components removed (or replaced). As we can see, using all components leads to the best results.
To evaluate the importance of the automatic grammar, we replaced it with the manually designed grammar used in Drori et al. (2019). For the POKER, SPECTRO, WORDS, and SICK datasets, AlphaD3M was not able to produce valid pipelines when the manual grammar was used, which highlights the importance of automatically generating the grammar. These datasets contain multiple types of features (e.g., text, DateTime), which were not covered by the manually constructed grammar. The prioritization of primitives also plays an important role in AlphaD3M: when this feature was not used, performance decreased, e.g., on the POKER, SPECTRO, and LIBRAS datasets. As we can see in Figure 3, for most of the datasets, AlphaD3M obtained the same results when we removed the hyperparameter tuning component. This suggests that the heuristic used by AlphaD3M (tuning only the top-k pipelines) may miss good pipelines that would attain better performance after tuning. In future work, we plan to investigate alternative strategies for hyperparameter tuning that attain a better balance between computational cost and pipeline performance.
Figure 4: Performance of AutoML systems in the OpenML Benchmark. The X-axis shows the accuracy values (normalized by the best score), and the Y-axis shows the IDs of the OpenML tasks.
OpenML Benchmark. Similar to Erickson et al.
(2020), we compared our system with AutoWEKA, TPOT, H2O, AutoGluon, and Auto-Sklearn 2.0 (hereinafter referred to as Auto-Sklearn) on the 39 OpenML datasets (Gijsbers et al., 2019). This corpus contains a variety of datasets intended to represent real-world data science problems and covers binary and multiclass classification tasks. We used AMLB (Gijsbers et al., 2022) to compare the systems, running them locally for one hour using a single fold split and accuracy as the optimization metric. For this experiment, we used 4 CPU cores (Intel Xeon Platinum 8268, 2.9 GHz) and 32 GB of memory.
Figure 4 shows the scores (normalized by the best score) of all the systems (the detailed scores can be found in Tables 6 and 7 in the Appendix). As we can see, AlphaD3M produced pipelines whose performance is on par with the other AutoML systems. We also calculated the average rank of each system over the 39 datasets: AlphaD3M got an average rank of 3.64, while Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA got 2.08, 2.33, 3.08, 3.72, and 5.10, respectively. To better understand these numbers, we also estimated the performance gain of the pipelines found by AlphaD3M against the pipelines generated by the other systems. The average gain of AlphaD3M on the OpenML datasets was +0.001, which shows that, in general, AlphaD3M attained good results for this collection. We analyzed the 3 datasets (task_146195, task_167119, and task_168331) for which AlphaD3M generated pipelines with performance lower than the other systems. This happened because these datasets are imbalanced and have multiple classes. The performance of AlphaD3M on these datasets could be improved by including primitives that handle imbalanced data. This underscores the importance of being able to add primitives to AutoML systems.
Concerning coverage, it is important to highlight that AlphaD3M succeeded on 38 datasets. Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA solved 39, 39, 34, 29, and 28 datasets, respectively.
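The two summary statistics used above (per-dataset scores normalized by the best score, and the average gain of one system over its best competitor) can be sketched as follows. This is a hedged, illustrative reconstruction; the function names are ours, not part of AMLB or the paper's scripts, and `None` is used as a stand-in for a system that failed on a dataset.

```python
# Illustrative sketch of score normalization and average-gain computation.
def normalized(row):
    """Divide each system's score by the best score on the dataset."""
    best = max(v for v in row.values() if v is not None)
    return {s: (v / best if v is not None else None) for s, v in row.items()}

def average_gain(scores, system):
    """Mean difference between `system` and the best competing system."""
    gains = []
    for row in scores.values():
        if row.get(system) is None:
            continue  # system produced no pipeline for this dataset
        others = [v for s, v in row.items() if s != system and v is not None]
        if others:
            gains.append(row[system] - max(others))
    return sum(gains) / len(gains)

scores = {"t1": {"AlphaD3M": 0.79, "X": 0.76},
          "t2": {"AlphaD3M": 0.96, "X": 0.98}}
g = average_gain(scores, "AlphaD3M")  # (0.03 - 0.02) / 2 = 0.005
```

A small positive average gain, like the +0.001 reported above, indicates that the system's pipelines are on par with the best competing pipelines across the collection.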
As pointed out by Gijsbers et al. (2022), the results of Auto-Sklearn on the OpenML datasets must be considered very carefully, since there could be an overlap between the datasets used in its meta-learning process and the ones used in the evaluation. It is important to highlight that none of the OpenML datasets are included in the version of Marvin that was used by AlphaD3M in these experiments.
4.2 Use Cases
Pivoting across ML tasks. Predicting hostile actions against ships and mariners worldwide is important to prevent piracy and prosecute the aggressors. Consider an analyst from the U.S. National Geospatial-Intelligence Agency (NGA) who is building a model using the Anti-Shipping Activity Messages dataset (ASAM, 2021). She wants to identify which records mention guns and which do not. This is a non-trivial problem, since a variety of terms (e.g., pistol, rifle, etc.) can indicate whether a gun is present. The dataset contains 8,000 documents, of which 1,400 were annotated. She started by using AlphaD3M to create models from the 1,400 labeled documents, setting the model search time to 1 hour. AlphaD3M derived high-quality pipelines; the best pipeline had an F1 of 0.90. However, she wondered whether these pipelines could be further improved, in particular by leveraging the 6,600 unlabeled documents through semi-supervised learning. AlphaD3M supports a wide range of tasks, including semi-supervised learning; users just need to add the keyword "semiSupervised" as a parameter. She then ran a new experiment using the 1,400 labeled and 6,000 unlabeled instances as the training dataset. The results improved from 0.90 to 0.95 F1. These experiments show that, by using AlphaD3M, data scientists can improve their results, pivoting from one task (classification) to another (semi-supervised classification) very quickly.
Reducing pipeline execution time through model exploration.
Using content analysis and predictive modeling for conflict assessment is a common approach for conflict analysts to guide policy-making decisions (D'Orazio, 2020). Consider a conflict analyst trying to categorize explosion events that involve terrorist activities. She uses the explosion events dataset (Raleigh et al., 2010), which contains 20,000 articles describing events that involve terrorist activities. An article is relevant if it describes attacks involving explosions. To create classification models, she ran AlphaD3M for 1 hour. The system synthesized high-quality pipelines, with F1 values around 0.9. To identify the most suitable pipeline, she used the PipelineProfiler to explore the derived models. She observed that the top-10 pipelines had similar scores, but their execution times were above 800 seconds. To address this problem, she tried a different strategy: combining progressive sampling and active learning to reduce the training data from 20,000 to 3,200 documents. Then, she re-ran AlphaD3M using the smaller set as the training dataset, while keeping the rest of the workflow unchanged. The top F1 score improved from 0.91 to 0.96, and the execution time dropped from 800 to 125 seconds.
5 Conclusions
We introduced AlphaD3M, an MT-AutoML library that automatically synthesizes end-to-end pipelines for 17 ML tasks and 6 different data types. AlphaD3M introduces new methods to automatically derive grammars and prioritize primitives, which are essential for effectively managing the large search space MT-AutoML systems must explore. In addition, AlphaD3M embraces a user-in-the-loop approach through an API that allows users to explore the input data and the derived ML pipelines, as well as to customize the pipelines. We presented a detailed experimental evaluation that compares our approach to several state-of-the-art AutoML systems over different problems and datasets.
The results suggest that AlphaD3M is effective: not only does it solve a larger number of problem types, but it also derives pipelines with performance that is superior or on par with those derived by other systems.
Although AlphaD3M's approach is primitive-agnostic, so far it only relies on the D3M primitives to build ML pipelines. We plan to extend AlphaD3M by including additional state-of-the-art and more recent primitives, e.g., models published in the HuggingFace or PyTorch Hub repositories. Moreover, we would like to improve the system's interoperability with existing open-source primitives that use standard APIs, such as the well-known scikit-learn fit-predict API.
Acknowledgements. This work was partially supported by the DARPA D3M program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
References
ASAM (2021). ASAM: Anti-Shipping Activity Messages. https://msi.nga.mil/Piracy.
Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for Hyper-Parameter Optimization. In Proceedings of NIPS, pages 2546–2554.
Bergstra, J. and Bengio, Y. (2012). Random Search for Hyper-parameter Optimization. JMLR, pages 281–305.
Cashman, D., Humayoun, S. R., Heimerl, F., Park, K., Das, S., Thompson, J., Saket, B., Mosca, A., Stasko, J. T., Endert, A., Gleicher, M., and Chang, R. (2018). Visual Analytics for Automated Model Discovery. CoRR.
D3M (2022). D3M Website. https://datadrivendiscovery.org.
D3M Primitives (2022). D3M Primitives Website. https://gitlab.com/datadrivendiscovery/primitives/-/tree/master/primitives.
Datamart Profiler Library (2021). Datamart Profiler Website. https://pypi.org/project/datamart-profiler/.
Dolatnia, N., Fern, A., and Fern, X. (2016). Bayesian Optimization with Resource Constraints and Production. In Proceedings of ICAPS, pages 115–123.
D'Orazio, V. (2020). Conflict Forecasting and Prediction.
In Oxford Research Encyclopedia of International Studies. Oxford University Press.
Drori, I., Krishnamurthy, Y., Lourenco, R., Rampin, R., Cho, K., Silva, C., and Freire, J. (2019). Automatic Machine Learning by Pipeline Synthesis using Model-based Reinforcement Learning and a Grammar. In 6th ICML Workshop on Automated Machine Learning.
Elliott, J. (2020). DARPA Data-Driven Discovery of Models (D3M) Program. https://www.darpa.mil/program/data-driven-discovery-of-models.
Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv preprint arXiv:2003.06505.
Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2021). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning.
Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and Robust Automated Machine Learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.
Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: an AutoML Benchmark.
Gijsbers, P., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2019). An Open Source AutoML Benchmark. In 6th ICML Workshop on Automated Machine Learning.
Gil, Y., Honaker, J., Gupta, S., Ma, Y., D'Orazio, V., Garijo, D., Gadewar, S., Yang, Q., and Jahanshad, N. (2019). Towards Human-guided Machine Learning. In Proceedings of the Conference on Intelligent User Interfaces (IUI), pages 614–624. ACM.
Google Cloud AutoML (2020). Google Cloud AutoML Website. https://cloud.google.com/automl.
Grafberger, S., Guha, S., Stoyanovich, J., and Schelter, S. (2021). MLINSPECT: a Data Distribution Debugger for Machine Learning Pipelines. age, 20:123.
Habibi, M., Starlinger, J., and Leser, U. (2020).
Tabsim: A Siamese Neural Network for Accurate Estimation of Table Similarity. In 2020 IEEE International Conference on Big Data (Big Data), pages 930–937. IEEE.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.
Hutter, F., Kotthoff, L., and Vanschoren, J. (2019). Automated Machine Learning: Methods, Systems, Challenges. Springer.
Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., and Leyton-Brown, K. (2017). Auto-WEKA 2.0: Automatic Model Selection and Hyperparameter Optimization in WEKA. The Journal of Machine Learning Research, 18(1).
LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable Automatic Machine Learning. 7th ICML Workshop on Automated Machine Learning (AutoML).
Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T., Sass, R., and Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research, 23(54):1–9.
Marvin (2020). Marvin Website. https://datadrivendiscovery.org/marvin.
Olson, R. S. and Moore, J. H. (2016). TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning. In ICML AutoML Workshop, pages 66–74.
Ono, J. P., Castelo, S., López, R., Bertini, E., Freire, J., and Silva, C. T. (2021). PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines. IEEE Transactions on Visualization and Computer Graphics, 27:390–400.
Raleigh, C., Linke, A., Hegre, H., and Karlsen, J. (2010). Introducing ACLED: An Armed Conflict Location and Event Dataset: Special Data Feature. Journal of Peace Research, 47(5):651–660.
Santos, A., Castelo, S., Felix, C., Ono, J. P., Yu, B., Hong, S. R., Silva, C. T., Bertini, E., and Freire, J. (2019). Visus: An Interactive System for Automatic Machine Learning Model Building and Curation.
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA), pages 1–7. Association for Computing Machinery.
Sheskin, D. J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press.
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. Conference on Neural Information Processing Systems.
Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M. M. A., Prabhat, P., and Adams, R. P. (2015). Scalable Bayesian Optimization Using Deep Neural Networks. In Proceedings of the ICML, pages 2171–2180.
Trabelsi, M., Chen, Z., Zhang, S., Davison, B. D., and Heflin, J. (2022). StruBERT: Structure-aware BERT for Table Search and Matching. arXiv preprint arXiv:2203.14278.
Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., and Guyon, I. (2021). Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020. CoRR, abs/2104.10201.
Wilson, G. T. (2016). Time Series Analysis: Forecasting and Control, 5th Edition. Journal of Time Series Analysis, 37(5):709–711.
Wistuba, M., Schilling, N., and Schmidt-Thieme, L. (2015). Learning Hyperparameter Optimization Initializations. In 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1–10. IEEE.
A Broader Impact Statement
AlphaD3M can potentially strengthen the efforts in democratizing data science by broadening the application of automated predictive pipelines. Subject experts can create their own pipelines and explore them in the context of an ethical framework.
Its interoperable software infrastructure enables external auditing and improves the trust and interpretability of synthesized pipelines. The search space management mechanism also allows efficient resource allocation and helps to prototype pipelines before performing high energy-consuming model training.
B Submission Checklist
1. For all authors. . .
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See it mainly in Sections 3 and 4.
(b) Did you describe the limitations of your work? [Yes] See Section 5. We also discuss the infeasibility of AutoML systems in general, and our efforts to mitigate limitations.
(c) Did you discuss any potential negative societal impacts of your work? [No] However, we advocate for the necessity of human-in-the-loop to build trust in the generated pipelines.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] Our paper follows these guidelines.
2. If you are including theoretical results. . .
(a) Did you state the full set of assumptions of all theoretical results? [N/A] We are not including theoretical results.
(b) Did you include complete proofs of all theoretical results? [N/A] We are not including theoretical results.
3. If you ran experiments. . .
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation and execution commands (either in the supplemental material or as a URL)? [Yes] We provide a link to our public GitLab repository and documentation webpage, where users can find information about the installation and instructions to run our system.
The reported evaluation was conducted by a third (independent) party in a competition among AutoML systems, so we cannot release that code.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] See the scripts/paper_automlconference folder in our repository.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] See the scripts/paper_automlconference folder in our repository.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] Our code is well documented and follows coding standards and best practices. We provide different Jupyter notebook examples and an API to show how to use AlphaD3M.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [No] We do not specify all the details. However, some details, like the data splits and search spaces, are publicly available in the references.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training, and hyperparameters for that code? [Yes] See Section 4.1.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 4.1.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We presented two comparisons (see Section 4). For the first comparison, we used the same protocol. For the second one, we used an existing asset and we evaluated our system using the same time protocol.
(i) Did you compare performance over time?
[No] We ran the systems for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021), and reported the best score during this time.
(j) Did you perform multiple runs of your experiments and report random seeds? [N/A] We did not perform multiple runs of our experiments.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not report error bars.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A] We did not use surrogate benchmarks.
(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] Some of the reported evaluations were conducted by a third party.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also hyperparameters of your own method)? [N/A] The hyperparameters were automatically tuned by our AutoML engine.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .
(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 4.1.
(b) Did you mention the license of the assets? [No] However, all assets are publicly available and the licenses can be retrieved from the references.
(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] We included a URL to the data used in the experiments.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The assets used in this paper are publicly available.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data used does not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not carry out a user study.
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not carry out a user study.
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not carry out a user study.
C Additional Details
C.1 Algorithms
Algorithm 1 describes the process of building the grammar. getVectorTK and getVectorST represent the BOW and one-hot encoding functions, respectively. The best values, empirically calculated, for the thresholds tsim and tperf are 0.8 and 0.5, respectively.
Algorithm 1 Grammar Builder
Input: Marvin datasets D, query dataset q, threshold t
Initialize S = [] // Similar datasets
for di in D do
    simTK = cosineSimilarity(getVectorTK(di), getVectorTK(q))
    if simTK > tsim then
        simST = cosineSimilarity(getVectorST(di), getVectorST(q))
        if simST > tsim then
            Add di to S
Initialize P = calculateADTM(S)
Initialize R = [] // Production Rules
for pi in P do
    if performance(pi) > tperf then
        ri = convertToPattern(pi)
        Add ri to R
return R
Algorithm 2 describes the process of calculating the primitive importance values in detail. For instance, the primitive importance values calculated for XGBoost and Random Forest are 0.62 and 0.56, whereas for Nearest Centroid and K-Nearest Neighbors the values are 0.46 and 0.44. This shows that the importance values can be used as an indicator to prioritize the usage of primitives.
Algorithm 2 Primitives Importance
Input: Pipelines P, Patterns T
Initialize R = getPrimitives(P)
Initialize G, L = [] // Global and Local correlations
for ri in R do
    pc = PearsonCorrelation(ri, P)
    npc = normalize(pc)
    Add npc to G
for ti in T do
    pi = getPipelines(ti, P)
    R = getPrimitives(ti, pi)
    for ri in R do
        pc = PearsonCorrelation(ri, R)
        npc = normalize(pc)
        Add npc to L
return (G, L)
C.2 Grammars
Different tasks require different grammars.
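Before detailing the grammars, the two-stage similarity check at the heart of Algorithm 1 (Grammar Builder) can be sketched compactly. This is an illustrative example, not the Marvin implementation: a dataset is kept only if both its token (BOW) vector and its semantic-type one-hot vector have cosine similarity above tsim = 0.8 with the query dataset; the vectorization itself is a simplified stand-in for getVectorTK / getVectorST.

```python
# Hedged sketch of the dataset-similarity filter from Algorithm 1.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

T_SIM = 0.8  # empirically chosen threshold reported in the paper

def similar_datasets(marvin, query):
    """marvin: list of (tk_vector, st_vector) pairs; query: one such pair.
    Returns the indices of datasets passing both similarity checks."""
    selected = []
    for i, (tk, st) in enumerate(marvin):
        # stage 1: token (BOW) similarity; stage 2: semantic-type similarity
        if cosine(tk, query[0]) > T_SIM and cosine(st, query[1]) > T_SIM:
            selected.append(i)
    return selected
```

The surviving datasets then feed the ADTM computation, and only patterns whose pipelines exceed tperf = 0.5 become production rules.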
For instance, the algorithms needed to solve time-series and semi-supervised classification problems have a different structure and use a different set of primitives. Consequently, specialized grammars and production rules are needed for each task. Manually creating these grammars is time-consuming and error-prone, and relying on them can limit the effectiveness of AutoML systems with respect to problem coverage and the quality of the derived pipelines.
Figure 5 shows an excerpt of a grammar automatically generated by AlphaD3M to solve classification problems. The start symbol (S) is the starting point from which all the production rules can be derived. In the grammar, the terminal 'primitive' can be any of the available algorithms in AlphaD3M, and 'E' represents the empty symbol.
S ::= CATEGORICAL_ENCODER TEXT_FEATURIZER DATA_CONVERSION IMPUTATION CLASSIFICATION
S ::= TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER DIMENSIONALITY_REDUCTION CLASSIFICATION
S ::= DATA_STRUCTURE_ALIGNMENT IMPUTATION CLASSIFICATION
S ::= IMPUTATION FEATURE_SCALING CLASSIFICATION
S ::= IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION DIMENSIONALITY_REDUCTION CLASSIFICATION
IMPUTATION ::= 'primitive' | 'E'
CATEGORICAL_ENCODER ::= 'primitive' | 'E'
FEATURE_SCALING ::= 'primitive' | 'E'
FEATURE_SELECTION ::= 'primitive' | 'E'
DIMENSIONALITY_REDUCTION ::= 'primitive' | 'E'
DATA_CONVERSION ::= 'primitive'
TEXT_FEATURIZER ::= 'primitive'
DATA_STRUCTURE_ALIGNMENT ::= 'primitive'
CLASSIFICATION ::= 'primitive'
Figure 5: Excerpt of a grammar automatically generated by AlphaD3M for classification tasks
In Figure 6, you can see the manual grammar used in the experiments. This grammar was proposed by Drori et al. (2019).
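To make concrete how production rules like those in Figure 5 constrain the search space, the following minimal sketch (ours, not AlphaD3M's implementation) expands one rule into the pipeline skeletons it can derive: optional categories (those that may derive the empty symbol 'E') can be dropped, and each remaining category is later bound to a concrete primitive.

```python
# Illustrative expansion of one production rule into pipeline skeletons.
import itertools

RULE = ["IMPUTATION", "FEATURE_SCALING", "CLASSIFICATION"]
OPTIONAL = {"IMPUTATION", "FEATURE_SCALING"}  # may derive the empty symbol 'E'

def skeletons(rule):
    """Enumerate all pipeline skeletons derivable from a single rule."""
    options = [([step], ["E"]) if step in OPTIONAL else ([step],)
               for step in rule]
    for choice in itertools.product(*options):
        # flatten and drop empty symbols
        yield [s for part in choice for s in part if s != "E"]

all_skeletons = list(skeletons(RULE))
```

For this rule, four skeletons result, from the full three-step pipeline down to a bare CLASSIFICATION step; the search then only has to choose primitives for the categories each skeleton retains, rather than exploring arbitrary primitive sequences.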
To generate this grammar for classification and regression tabular tasks, a developer was asked to manually review the primitives and group them into categories. For instance, the primitives decision_tree.SKlearn and random_forest.SKlearn were grouped into the category 'CLASSIFICATION'. Then, using his knowledge of ML, he created the production rules of the grammar from these categories.
S ::= CLASSIFICATION_TASK | REGRESSION_TASK
CLASSIFICATION_TASK ::= CLASSIFICATION | DATA_CLEANING CLASSIFICATION | DATA_TRANSFORMATION CLASSIFICATION | DATA_CLEANING DATA_TRANSFORMATION CLASSIFICATION
REGRESSION_TASK ::= REGRESSION | DATA_CLEANING REGRESSION | DATA_TRANSFORMATION REGRESSION | DATA_CLEANING DATA_TRANSFORMATION REGRESSION
CLASSIFICATION ::= 'primitive'
REGRESSION ::= 'primitive'
DATA_CLEANING ::= 'primitive' DATA_CLEANING | 'E'
DATA_TRANSFORMATION ::= 'primitive' DATA_TRANSFORMATION | 'E'
Figure 6: Manual Grammar
C.3 Experiments
In Table 4, we can see the scores obtained by all the AutoML systems developed in the D3M program, including a majority voting ensemble system, on a collection of 112 datasets. This collection
This collection17contains challenging datasets that go beyond the simple tabular data and cover a wide variety oftasks and data types.Table 4: Scores obtained by AlphaD3M and the other AutoML systems developed in the D3M program.Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori124_120_mnist_8747 0.98 0.94 0.46 0.18 0.94 0.11 - -124_138_cifar100_1858 0.67 0.48 0.42 0.12 0.48 0.01 - -124_16_fashion_mnist 0.90 0.83 0.84 0.12 0.85 0.10 - -124_174_cifar10_MIN 0.88 0.82 0.84 0.27 0.80 0.10 - -124_188_usps_MIN 0.96 0.95 0.94 0.26 0.92 0.18 0.11 -124_214_coil20_MIN 0.99 0.99 0.99 0.85 0.97 - - -124_95_uc_merced_land_use_MIN 0.90 - 0.72 0.52 - 0.05 0.33 -1491_one_hundred_plants_margin_MIN 0.80 0.79 0.88 0.92 0.75 0.83 0.81 0.831567_poker_hand_MIN 0.90 0.84 0.28 0.48 0.12 0.13 - 0.27185_baseball_MIN 0.66 0.70 0.65 0.68 0.68 0.67 0.66 0.64196_autoMpg_MIN 6.57 9.12 5.74 11.95 7.49 6.01 15.36 7.0322_handgeometry_MIN 0.24 0.23 0.23 0.14 0.80 0.36 0.36 -26_radon_seed_MIN 0.02 0.02 0.24 0.03 0.02 0.06 1.40 0.0227_wordLevels_MIN 0.32 0.28 0.28 0.32 0.29 0.27 0.26 0.27299_libras_move_MIN 0.98 - - 0.48 - - 0.98 0.9730_personae_MIN 0.62 0.65 0.65 0.62 0.61 0.55 0.61 -313_spectrometer_MIN 0.43 0.37 0.37 0.30 0.32 0.33 0.23 0.4031_urbansound_MIN 0.93 0.93 0.91 0.75 0.92 0.77 0.49 -32_fma_MIN 0.55 0.57 0.34 0.28 - 0.11 0.11 -32_wikiqa_MIN 0.00 0.02 0.14 0.13 0.50 - 0.13 -38_sick_MIN 1.00 1.00 - 1.00 - - 0.49 1.004550_MiceProtein_MIN 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.0049_facebook_MIN 0.88 0.87 0.87 0.87 0.87 0.88 0.44 -534_cps_85_wages_MIN 20.11 20.35 22.07 23.15 24.86 21.44 - 20.7056_sunspots_MIN 34.55 11.82 8.64 8.45 58.30 9.40 90.60 -56_sunspots_monthly_MIN 64.61 41.18 46.86 41.04 - 62.20 27.74 -57_hypothyroid_MIN 0.96 0.98 0.99 0.98 0.74 0.99 0.97 0.9859_LP_karate_MIN 0.93 0.45 0.83 0.83 0.45 0.45 0.93 -59_umls_MIN 0.92 0.94 0.94 0.94 0.94 0.70 0.73 -60_jester_MIN 4.25 - 4.24 4.15 - 4.51 - -66_chlorineConcentration_MIN 0.82 0.86 0.81 0.52 0.78 0.68 0.23 
-6_70_com_amazon_MIN 0.85 0.85 - 0.85 0.85 - - -6_86_com_DBLP_MIN 0.72 0.72 - 0.72 0.72 - - -JIDO_SOHR_Articles_1061 0.98 0.94 0.94 0.81 0.56 0.60 0.64 -JIDO_SOHR_Tab_Articles_8569 1.00 0.99 1.00 1.00 0.56 1.00 1.00 -LL0_1100_popularkids_MIN 0.42 0.45 0.38 0.38 0.40 0.44 - 0.47LL0_186_braziltourism_MIN 0.14 0.35 0.36 0.17 0.24 0.20 0.34 0.16LL0_207_autoPrice_MIN 4.89·1065.76·1066.04·1063.76·1075.36·1065.43·1061.56·1085.81·106LL0_acled_reduced_MIN 0.83 0.88 0.89 0.84 0.91 0.85 0.74 0.91LL0_jido_reduced_MIN 0.90 0.89 0.91 0.90 0.90 0.90 - 0.90LL1_2734_CLIR 0.88 0.50 0.52 0.88 - - 0.50 -LL1_336_MS_Geolife_transport_MIN 0.60 1.00 0.99 - 0.85 - 0.98 -LL1_336_MS_Geolife_transport_separate 0.67 1.00 0.99 - 0.86 - 0.99 -LL1_3476_HMDB_actio_recognition_MIN 0.11 1.00 0.90 0.11 - 0.48 0.08 -LL1_50words_MIN 0.35 0.55 0.56 0.41 0.51 0.45 0.35 -LL1_726_TIDY_GPS_carpool 0.54 0.58 0.58 0.46 0.59 - 0.63 -LL1_736_population_spawn_MIN 1636.12 1806.40 1804.76 1644.26 - 2845.89 - -LL1_736_population_spawn_simpler_MIN 1346.10 1490.15 3669.54 1347.65 1323.72 1550.40 19887.20 -LL1_736_stock_market_MIN 7.64 1.49 8.69 1.75 - 30.66 - -LL1_ACLED_TOR_online_behavior_MIN 0.40 0.05 0.44 0.64 0.43 0.66 0.08 0.40LL1_Adiac_MIN 0.75 0.70 0.73 0.54 0.67 0.70 0.49 -LL1_ArrowHead_MIN 0.75 0.82 0.78 0.72 0.65 0.55 0.72 -LL1_CONFLICT_3457_atrocity 9.53 6.75 11.43 12.84 - 17.21 13.91 -LL1_Cricket_Y_MIN 0.52 0.54 0.59 0.52 0.62 0.53 0.45 -LL1_DIC28_net_MIN 0.84 0.80 0.80 0.80 0.80 0.84 - -LL1_ECG200_MIN 0.90 0.87 0.87 0.86 0.91 0.85 0.86 -LL1_EDGELIST_net_nomination_MIN 0.99 0.66 0.85 0.94 0.66 0.35 0.84 -LL1_ElectricDevices_MIN 0.54 0.42 0.46 0.06 0.44 0.27 0.31 -LL1_FISH_MIN 0.80 0.87 0.89 0.73 0.84 0.86 0.78 -LL1_FaceFour_MIN 0.84 0.83 0.71 0.55 0.65 0.40 0.66 -18(Table 4: Continued from the previous page)Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl DroriLL1_GS_process_classification_tab_MIN 0.80 0.80 0.80 0.80 0.80 0.73 - 0.81LL1_GS_process_classification_text_MIN 0.65 0.80 0.65 0.80 
0.80 0.76 0.80 -LL1_GT_actor_group_association_MIN 0.25 0.13 0.17 0.13 - - - -LL1_HandOutlines_MIN 0.89 0.91 0.90 0.88 0.88 0.88 0.88 -LL1_Haptics_MIN 0.43 0.42 0.44 0.42 0.41 0.45 0.42 -LL1_ItalyPowerDemand_MIN 0.93 0.95 0.95 0.95 0.95 0.91 0.90 -LL1_MIL_MUSK 0.68 0.77 0.83 0.67 0.80 0.80 - 0.72LL1_MIL_Mutagenesis 0.80 0.73 0.72 0.71 0.70 0.63 - 0.79LL1_MITLL_synthetic_vora_E_2538 0.29 0.53 0.52 0.50 0.31 0.44 - 0.38LL1_Meat_MIN 0.95 0.94 0.88 0.92 0.88 0.17 0.95 -LL1_OSULeaf_MIN 0.53 0.44 0.52 0.77 0.45 0.47 0.32 -LL1_PHEM_Monthly_Malnutrition_MIN 10.63 9.56 9.39 9.73 - 12.18 - -LL1_PHEM_weekly_malnutrition_MIN 3.34 4.32 3.45 2.94 - 4.23 4.18 -LL1_TXT_CLS_3746_newsgroup_MIN 0.60 0.46 0.55 0.48 0.60 0.45 0.23 -LL1_TXT_CLS_SST_Binary 0.73 0.82 0.82 0.55 - 0.51 0.53 -LL1_TXT_CLS_airline_opinion_MIN 0.81 0.80 0.81 0.80 0.81 0.72 0.72 -LL1_TXT_CLS_apple_products_sent_MIN 0.73 0.71 0.72 0.72 0.73 0.66 0.69 -LL1_VID_UCF11_MIN 0.99 0.99 0.25 0.27 - 0.02 0.08 -LL1_VTXC_1343_cora_MIN 0.61 0.04 0.22 0.17 0.04 0.13 0.52 -LL1_VTXC_1369_synthetic_MIN 0.95 0.22 0.33 0.21 0.22 0.19 0.48 -LL1_ViEWS_CM_S1 0.69 1.20 0.90 0.72 0.75 2.52 - 0.82LL1_ViEWS_PGM_S1 0.02 0.04 0.02 - 0.02 0.02 0.30 0.02LL1_bigearth_landuse_detection 0.90 0.96 0.76 0.65 0.21 - - -LL1_bn_fly_drosophila_medulla_net_MIN 0.24 0.24 - - - 0.19 - -LL1_h1b_visa_apps_7480 0.44 0.47 0.43 0.44 0.41 0.41 0.47 0.42LL1_net_nomination_seed_MIN 0.99 0.99 0.96 0.94 0.99 0.34 0.46 -LL1_penn_fudan_pedestrian_MIN 0.94 0.94 - 0.94 0.94 - - -LL1_retail_sales_total_MIN 1989.19 1921.54 1941.06 1966.30 1992.17 - 1971.76 2022.41LL1_terra_canopy_height_s4_100_MIN 113.04 68.44 39.02 52.21 - 79.86 343.27 -LL1_terra_canopy_height_s4_70_MIN 104.92 547.94 126.06 136.32 - 169.63 136.98 -LL1_terra_canopy_height_s4_80_MIN 112.95 92.95 32.57 74.59 - 111.49 74.54 -LL1_terra_canopy_height_s4_90_MIN 117.13 85.73 35.12 60.44 - 104.49 60.45 -LL1_terra_leaf_angle_mean_s4_MIN 0.04 0.09 0.05 0.04 - - 0.05 -LL1_tidy_terra_panicle_detection_MIN 0.01 
0.03 - - - - - -SEMI_1040_sylva_prior_MIN 0.93 0.90 0.93 - 0.92 - - -SEMI_1044_eye_movements_MIN 0.52 0.57 0.61 0.55 0.60 0.53 0.54 -SEMI_1053_jm1_MIN 0.26 1.00 0.16 - 0.16 0.41 - -SEMI_1217_click_prediction_small_MIN 0.04 0.03 0.04 - 0.17 - - -SEMI_1459_artificial_characters_MIN 0.68 0.99 0.83 0.99 0.67 0.61 0.52 -SEMI_155_pokerhand_MIN 0.58 0.66 0.60 0.05 0.64 0.50 0.51 -kaggle_music_hackathon_MIN 21.88 17.56 19.64 24.24 21.79 - - 21.85loan_status_MIN 0.40 0.50 0.51 0.44 0.33 - 0.48 0.46political_instability_MIN 0.81 0.89 0.89 0.89 0.89 - 0.88 -uu1_datasmash_MIN 1.00 1.00 1.00 1.00 0.61 1.00 1.00 -uu2_gp_hyperparameter_estimation_MIN 0.89 0.88 0.57 0.89 - - - 0.89uu3_world_development_indicators_MIN 2.39·10105.54·10124.12·1012-4.40·1012- - -uu3_world_development_indicators_raw 7.83·10131.04·10125.22·1011- - - - -uu4_SPECT_MIN 0.00 0.92 0.92 0.90 0.89 0.90 0.78 -uu5_heartstatlog_MIN 0.70 0.69 0.72 0.62 0.61 0.72 0.67 -uu6_hepatitis_MIN 0.00 0.47 0.89 0.40 0.27 0.31 0.44 -uu7_pima_diabetes_MIN 0.59 0.57 0.60 0.57 0.60 0.63 0.57 -uu_101_object_categories_MIN 0.95 0.89 0.84 0.34 - 0.10 - -19The average rank values obtained by different AutoML systems for each task type in the D3Mdatasets can be seen in Table 5. 
These datasets contain a total of 17 unique ML tasks.

Table 5: Average rank values by task obtained by different AutoML systems.
Task AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori
Image Classification 1.11 2.78 2.78 4.56 4.33 6.22 7.44 8.00
Tabular Classification 3.75 3.30 3.35 3.85 4.85 4.65 5.85 3.55
Tabular Regression 2.27 3.18 3.00 5.73 4.27 5.73 7.54 4.36
Image Regression 4.00 2.00 2.00 1.00 7.00 5.00 5.00 8.00
Text Classification 2.56 3.33 2.22 3.00 3.56 5.78 4.33 8.00
Audio Classification 1.50 1.00 3.50 5.00 5.50 5.00 6.00 8.00
Graph Matching 1.00 3.33 3.00 2.33 4.67 3.33 6.33 8.00
Time series Forecasting 3.38 3.62 2.62 2.23 7.31 5.08 5.08 8.00
Link Prediction 3.33 2.33 2.33 1.67 4.67 6.67 5.00 8.00
Collaborative Filtering 3.00 8.00 2.00 1.00 8.00 4.00 8.00 8.00
Time series Classification 3.26 2.26 2.16 4.68 3.79 5.32 4.53 8.00
Community Detection 1.00 1.00 8.00 3.33 3.33 6.33 8.00 8.00
Video Classification 2.50 1.00 3.00 3.50 8.00 4.50 5.50 8.00
Vertex Classification 1.00 4.00 3.25 4.25 4.00 6.50 3.50 8.00
Object Detection 1.50 1.00 8.00 4.50 4.50 8.00 8.00 8.00
Semisupervised Classification 3.50 2.33 2.33 6.00 2.83 6.00 6.83 8.00
LUPI 5.25 3.00 1.25 4.50 5.00 2.50 4.75 8.00

Table 6 and Table 7 show the raw and normalized scores (normalized by the best score) obtained by each system on the 39 datasets of the OpenML AutoML Benchmark (Gijsbers et al., 2019). This benchmark represents real-world data science problems and covers binary and multiclass classification tasks.
Additionally, Table 6 shows the gain of AlphaD3M regarding the other systems.

Table 6: Raw scores obtained by AlphaD3M and the other AutoML systems.
Dataset AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3M Gain
task_10101 0.76 0.76 0.76 0.76 0.76 0.79 0.03
task_12 0.98 0.98 0.98 0.98 - 0.96 -0.01
task_146195 0.88 0.71 0.86 0.88 0.85 0.81 -0.03
task_146212 1.00 1.00 1.00 1.00 1.00 1.00 0.00
task_146606 0.74 0.60 0.73 0.72 - 0.73 0.03
task_146818 0.91 0.86 0.84 0.90 0.87 0.87 -0.01
task_146821 0.99 1.00 1.00 1.00 1.00 0.97 -0.03
task_146822 0.97 0.97 0.97 0.97 0.98 0.97 0.00
task_146825 0.91 - 0.91 0.90 - 0.86 -0.05
task_14965 0.91 0.88 0.91 0.91 0.91 0.91 0.00
task_167119 0.92 0.80 0.94 0.96 0.90 0.83 -0.08
task_167120 0.51 0.51 0.51 0.51 - 0.51 -0.00
task_168329 0.40 0.27 0.38 0.35 0.35 0.37 0.02
task_168330 0.73 0.65 0.73 0.73 0.70 0.72 0.01
task_168331 0.73 0.62 0.73 0.69 0.66 0.66 -0.02
task_168332 0.56 - 0.54 0.51 0.44 0.41 -0.10
task_168335 0.94 - 0.94 - 0.93 0.94 -0.00
task_168337 0.84 - 0.86 0.83 0.77 0.61 -0.21
task_168338 1.00 - 1.00 1.00 0.99 0.97 -0.03
task_168868 0.99 0.99 0.99 1.00 0.99 0.99 0.00
task_168908 0.74 0.73 0.76 0.72 - 0.77 0.03
task_168909 0.99 0.96 0.99 0.98 - 0.99 0.01
task_168910 0.72 0.60 0.72 0.72 0.71 0.65 -0.04
task_168911 0.81 0.82 0.82 0.82 0.81 0.81 -0.01
task_168912 0.93 0.92 0.95 0.95 0.95 0.94 -0.00
task_189354 0.67 - 0.67 0.61 0.67 0.65 -0.01
task_189355 0.94 - 0.00 - - 0.88 0.41
task_189356 0.71 - 0.69 - - - -
task_3 0.99 0.93 0.99 1.00 0.99 0.99 0.01
task_31 0.77 0.66 0.82 - 0.82 0.77 0.00
task_34539 0.95 - 0.95 0.95 0.95 0.95 -0.01
task_3917 0.87 - 0.86 - 0.88 0.86 -0.01
task_3945 0.98 - 0.98 0.98 0.98 0.98 0.00
task_53 0.86 0.67 0.85 0.88 - 0.82 0.01
task_7592 0.87 0.87 0.87 0.86 0.87 0.87 0.00
task_7593 0.97 0.66 0.96 0.80 - 0.95 0.10
task_9952 0.88 0.91 0.90 0.90 0.91 0.91 0.01
task_9977 0.98 0.95 0.97 0.98 0.97 0.96 -0.00
task_9981 0.94 0.86 0.96 0.94 0.96 0.94 0.01

Table 7: Normalized scores obtained by AlphaD3M and the other AutoML systems.
Dataset AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3M
task_10101 0.97 0.97 0.97 0.97 0.97 1.00
task_12 0.99 1.00 0.99 0.99 - 0.98
task_146195 1.00 0.81 0.98 1.00 0.97 0.92
task_146212 1.00 1.00 1.00 1.00 1.00 1.00
task_146606 1.00 0.82 1.00 0.98 - 0.99
task_146818 1.00 0.94 0.92 0.98 0.95 0.95
task_146821 0.99 1.00 1.00 1.00 1.00 0.97
task_146822 1.00 0.99 1.00 1.00 1.00 1.00
task_146825 1.00 - 0.99 0.99 - 0.94
task_14965 1.00 0.96 1.00 1.00 1.00 1.00
task_167119 0.96 0.83 0.98 1.00 0.94 0.86
task_167120 1.00 1.00 1.00 0.99 - 0.99
task_168329 1.00 0.69 0.96 0.88 0.89 0.94
task_168330 1.00 0.89 1.00 1.00 0.97 0.98
task_168331 1.00 0.84 1.00 0.95 0.90 0.91
task_168332 1.00 - 0.98 0.93 0.80 0.75
task_168335 1.00 - 1.00 - 0.99 0.99
task_168337 0.98 - 1.00 0.97 0.89 0.71
task_168338 1.00 - 1.00 1.00 0.99 0.97
task_168868 1.00 0.99 1.00 1.00 1.00 1.00
task_168908 0.97 0.96 0.99 0.94 - 1.00
task_168909 1.00 0.97 1.00 0.99 - 1.00
task_168910 1.00 0.83 1.00 1.00 0.98 0.90
task_168911 0.99 1.00 1.00 1.00 0.99 0.98
task_168912 0.98 0.97 0.99 1.00 1.00 0.98
task_189354 1.00 - 1.00 0.91 1.00 0.96
task_189355 1.00 - 0.00 - - 0.94
task_189356 1.00 - 0.97 - - -
task_3 1.00 0.94 1.00 1.00 1.00 1.00
task_31 0.94 0.80 1.00 - 1.00 0.94
task_34539 1.00 - 1.00 1.00 0.99 0.99
task_3917 0.99 - 0.98 - 1.00 0.98
task_3945 1.00 - 1.00 0.99 1.00 1.00
task_53 0.97 0.76 0.96 1.00 - 0.93
task_7592 1.00 0.99 1.00 0.99 1.00 1.00
task_7593 1.00 0.68 0.99 0.82 - 0.97
task_9952 0.96 0.99 0.98 0.98 1.00 0.99
task_9977 1.00 0.97 1.00 1.00 1.00 0.99
task_9981 0.98 0.89 1.00 0.98 1.00 0.98
hDNJXKdCcYS
71eJdMzCCIi
automl.cc/AutoML/2023/ABCD_Track
2023
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks
["Roque Lopez", "Raoni Lourenco", "Remi Rampin", "Sonia Castelo", "A\u00e9cio S. R. Santos", "Jorge Henrique Piazentin Ono", "Claudio Silva", "Juliana Freire"]
We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to effectively construct pipelines over a large collection of primitives. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, including the selection of suitable pipelines and the development of solutions for complex systems. We present use cases that demonstrate some of these features. We report the results of detailed experimental evaluations which show that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance that is comparable or superior to state-of-the-art AutoML systems.
["AutoML", "Python Library", "Multiple ML Tasks"]
AlphaD3M: An Open-Source AutoML Library for Multiple ML Tasks

Roque Lopez (1), Raoni Lourenço (2), Remi Rampin (1), Sonia Castelo (1), Aécio Santos (1), Jorge Ono (1), Claudio Silva (1), Juliana Freire (1)
(1) New York University, (2) University of Luxembourg

Abstract. We present AlphaD3M, an open-source Python library that supports a wide range of machine learning tasks over different data types. We discuss the challenges involved in supporting multiple tasks and how AlphaD3M addresses them by combining deep reinforcement learning and meta-learning to construct pipelines over a large collection of primitives effectively. To better integrate the use of AutoML within the data science lifecycle, we have built an ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, including selecting suitable pipelines and developing custom solutions for complex problems. We present use cases that demonstrate some of these features. We report the results of a detailed experimental evaluation showing that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance comparable or superior to state-of-the-art AutoML systems.

1 Introduction

Automated Machine Learning (AutoML) has emerged as an alternative to automatically synthesize machine learning (ML) pipelines, thereby democratizing ML techniques to non-experts as well as increasing the productivity of data scientists. Different approaches have been proposed for AutoML systems. Some focus on specific components of an ML pipeline, such as hyperparameter optimization or model selection, while others, given a dataset and a prediction task, generate end-to-end pipelines that encompass data pre-processing, feature, and model selection (Hutter et al., 2019). Most end-to-end systems are designed to work with tabular data and only support classification and regression problems (Feurer et al., 2015; LeDell and Poirier, 2020; Olson and Moore, 2016; Kotthoff et al., 2017).
Cloud AutoML (Google Cloud AutoML, 2020) and AutoGluon (Erickson et al., 2020) also create pipelines to classify text and images and perform object detection tasks. However, these systems do not support more complex data types such as graphs, time series, audio, and video, limiting the types of problems they can address. Table 1 shows the set of task types supported by different AutoML systems.

In the context of DARPA's Data-Driven Discovery of Models (D3M) program (Elliott, 2020), several AutoML systems have been developed to support a wide range of data types and ML tasks using an extensive set of computational primitives as building blocks; we refer to these as multi-task AutoML (MT-AutoML) systems. MT-AutoML systems face an essential challenge: effectively searching an ample space of primitives required to synthesize pipelines for a broad range of tasks and data types. To prune the search space, many D3M MT-AutoML systems use manually crafted templates and grammars (D3M, 2022) that prescribe combinations of primitives that make sense for different problems.
This, in turn, leads to other challenges: creating these templates or grammars is not only time-consuming, but failing to include the necessary rules that cover the relevant primitives (and their combinations) for multiple task types can negatively impact the ability of an MT-AutoML system to derive performant pipelines.

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Table 1: Tasks supported by different AutoML Systems. [The table covers seventeen task types: tabular, text, image, audio, and video classification; tabular regression; clustering; time-series forecasting; time-series classification; object detection; LUPI; community detection; link prediction; graph matching; vertex classification; collaborative filtering; and semi-supervised classification. It marks which of AutoGluon, AutoWEKA, Auto-Sklearn, Cloud AutoML, H2O, TPOT, and AlphaD3M supports each; AlphaD3M is marked for every task type listed, while the other systems cover only subsets.]

We present AlphaD3M, an open-source AutoML library [1] that supports a wide range of data and problem types (see Table 1). AlphaD3M introduces new techniques to effectively navigate the large search spaces that MT-AutoML systems must explore. They include an algorithm that applies meta-learning to automatically derive task-based context-free grammars (CFGs) which cover a multitude of problems, and a novel search strategy that, based on previously generated pipelines and their performance, prioritizes primitives that are correlated with good pipeline performance.
Besides describing the API and these components, we also present case studies demonstrating how users can improve the ML solutions via interaction in AlphaD3M.

We conducted a detailed experimental evaluation to assess the ability of AlphaD3M to handle a rich set of tasks and data types, as well as to compare its performance against state-of-the-art AutoML and MT-AutoML systems. We used two benchmarks: (a) a collection of 112 datasets that covers seventeen different ML tasks, and (b) the OpenML AutoML Benchmark for tabular classification problems. Our results show that the search strategies used by AlphaD3M are effective: the system generates pipelines whose performance is superior or on par with those derived by other systems, including systems that focus on a small set of problems and have to navigate a much smaller search space.

2 Related Work

Task Coverage. Many AutoML systems have been proposed to work with tabular data, for example: Auto-sklearn (Feurer et al., 2015), TPOT (Olson and Moore, 2016), and H2O (LeDell and Poirier, 2020). The deep reinforcement learning algorithm proposed by Drori et al. (2019) aimed to support multiple learning tasks and data types; however, its implementation was limited to classification and regression tasks over tabular and text data. AutoML systems developed in industry, such as Cloud AutoML by Google and AutoGluon by Amazon, handle text and image data, but still support a limited number of learning tasks. In contrast, AlphaD3M supports a wide range of data types (tabular, text, images, audio, video, and graph) and a rich set of ML tasks, as shown in Table 1.

Data and Model Exploration. Interactive data analytics systems such as Visus (Santos et al., 2019), TwoRavens (Gil et al., 2019), and Snowcat (Cashman et al., 2018) have been developed to guide users throughout the model-building process, from exploring the input data to comparing the ML pipelines produced by AutoML systems.
They target primarily domain experts who have little or no expertise in ML and thus lack support for the customization of pipelines for complex problems. These systems trade off flexibility for ease of use. As such, they are limited to the operations implemented in their visual interfaces; extensive and time-consuming changes in their workflows are required to support new data types and tasks (e.g., graph data). Other approaches mimic the interface of traditional ML libraries, through which developers often build a single solution for a given task (Grafberger et al., 2021). AlphaD3M allows ML experts to explore the derived pipelines and customize them through a user-friendly interface within a Jupyter Notebook environment. In addition, instead of retrieving only the best pipeline, AlphaD3M returns all valid pipelines, and ranks and presents them to the user for comparison, refinement, and selection.

[1] https://gitlab.com/ViDA-NYU/d3m/alphad3m

3 The AlphaD3M Library

Figure 1: Overview of AlphaD3M.

AlphaD3M is a multi-task AutoML system. It is implemented in Python and can be used via pip installation or Docker. Figure 1 shows an overview of the library and its components. To build ML pipelines, AlphaD3M uses a rich set of primitives and a meta-learning database from the D3M ecosystem (D3M, 2022). The pipeline search is conducted by four modules which: (a) automatically construct task-specific grammars; (b) prioritize primitives that are more likely to be effective; (c) synthesize pipelines using Monte Carlo Tree Search and Neural Networks (Drori et al., 2019); and (d) tune hyperparameters. The library implements a Python API through which users can define the problem to be solved, explore the input data, obtain model summaries, analyze and compare the produced pipelines, as well as improve and deploy them.

3.1 The D3M Ecosystem

Primitives. AlphaD3M uses a comprehensive collection of primitives developed by performers in the D3M program as well as from open-source libraries (e.g., scikit-learn).
In total, there are 312 primitives available for different steps in ML pipelines, including data pre-processing, feature extraction, feature selection, prediction, and clustering (D3M Primitives, 2022); they implement state-of-the-art methods such as ResNet50 (He et al., 2016) and ARIMA (Wilson, 2016), among others.

The Marvin Meta-Learning Database. Marvin is an open corpus of curated ML pipelines, datasets, and problems (Marvin, 2020). All pipelines in Marvin share the same set of primitives and are specified using the D3M format. Marvin stores approximately 2.5 million pipelines executed over 600 datasets. Since these pipelines have been produced by data scientists and by AutoML systems that use different search strategies, the database covers a wide variety of pipeline patterns. As discussed below, we leverage the data in Marvin to assist in and improve the AlphaD3M search process. To the best of our knowledge, ours is the first work that explores this corpus.

3.2 Pipeline Search

The automatic synthesis of pipelines is a combinatorial problem in which we must find the best combinations of primitives and their hyperparameters. With 312 primitives and over 1,500 hyperparameters in the D3M ecosystem, the search space becomes prohibitively large. For instance, considering just the classification task over tabular data, there are 22 data cleaning, 87 data transformation, and 44 classifier primitives, leading to 84,216 possible pipelines to test. AlphaD3M uses the multi-pronged approach described below to manage this search space.

A. Pipeline Synthesis Using Monte Carlo Tree Search and Neural Networks. To synthesize the ML pipelines, AlphaD3M uses the strategy introduced by Drori et al. (2019), which is based on a single-player game technique inspired by AlphaZero (Silver et al., 2017). It applies model-based reinforcement learning with a neural network sequence model and a Monte Carlo Tree Search (MCTS).
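As a quick check on the search-space arithmetic quoted in Section 3.2, multiplying the per-step primitive counts for tabular classification reproduces the 84,216 figure; the three-step template shape is an assumption for illustration, while the counts come from the text:

```python
# Per-step primitive counts for tabular classification, as stated in the text.
n_cleaning, n_transform, n_classifiers = 22, 87, 44

# Assuming a three-step template [CLEANING, TRANSFORMATION, CLASSIFICATION],
# each combination of primitives yields one candidate pipeline, before any
# hyperparameter values are even considered.
n_pipelines = n_cleaning * n_transform * n_classifiers
print(n_pipelines)  # 84216
```

With over 1,500 hyperparameters on top of this, exhaustive enumeration is clearly infeasible, which motivates the guided search described in this section.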
The metadata encoding the pipeline, the dataset, and the task is analogous to an entire game-board configuration in AlphaZero. The possible game states consist of all valid pipelines generated from a set of primitives and modified by actions guided by a manually designed CFG. The model outputs a sequence of primitives. Pipelines are constructed by an LSTM. Given a state s composed of a vector encoding the whole board configuration (dataset, task, pipeline), the neural network predicts the probabilities P(s, a) over actions a from a state s. This process produces a set of action sequences S that describe a pipeline, which in turn solves task T on dataset D. The network also outputs an estimate of pipeline performance v. The reinforcement learning algorithm takes the predictions (P(s, a), v(s)) produced by the neural network and uses them in the MCTS by running multiple simulations to search for the pipeline sequence R with the best evaluation. An important benefit of this strategy is that it learns to synthesize pipelines.

B. Automatic Generation of Task-Based CFGs via Meta-Learning. Manually designed CFGs have many limitations; notably, they may not cover all applicable rules and pipeline structures and consequently prevent the search process from exploring desirable pipelines that do not fit the grammar. Furthermore, to create the production rules or patterns in the grammar, a user needs to have knowledge of all the available primitives for a specific task and how they work. For large primitive collections, this is a difficult task, which is compounded for MT-AutoML systems that support multiple problem types. Instead of relying on manually created CFGs, we propose a new strategy that uses meta-learning to derive grammars automatically and on the fly. It does so in two steps: 1) it selects task-specific pipelines and datasets from a meta-learning database (MLDB), and 2) uses these to derive a portfolio of pipeline patterns.

Selecting Task-Oriented Datasets.
Since AlphaD3M supports different tasks, we need to retrieve from the Marvin MLDB the pipelines produced for tasks and datasets similar to the ones provided as inputs to the AutoML system. For instance, if we want to solve a clustering problem over a dataset D, we retrieve the pipelines used for this problem over datasets similar to D. To select relevant pipelines for a given problem P over dataset D, we use the "task keywords" tag list provided in the problem definition as features that describe the task to be solved, and search Marvin for pipelines that contain a similar set of keywords. The list is encoded as a bag-of-words (BOW). Since the set is small and most of the tags are non-standard words, e.g., collaborativeFiltering or timeSeries, it is possible to obtain accurate matches with this simple approach.

Given the set of relevant pipelines RP, we select a subset RPD containing pipelines that were applied to datasets similar to D. To determine whether two datasets are similar, we use dataset features, including semantic types (e.g., categorical, date-time) and missing values, and encode them using one-hot encoding. Datasets are compared using cosine similarity.

The current implementation uses 16 unique semantic types detected by the datamart_profiler (Datamart Profiler Library, 2021). In contrast to approaches such as TabSim (Habibi et al., 2020) or StruBERT (Trabelsi et al., 2022), AlphaD3M uses semantic types because the grammar defines components to handle the dataset's features, such as categorical or date-time encoders, and these components are strongly related to semantic types. Also, those approaches focus on tabular datasets, whereas AlphaD3M handles other types of datasets as well, such as image and text datasets. Finally, running those approaches is very time-consuming.

Creating a Portfolio of Patterns. After identifying similar datasets, the next step is to select the best pipelines to create a portfolio of pipeline patterns.
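The two matching steps just described (bag-of-words over task keywords, one-hot semantic-type features compared by cosine similarity) can be sketched as follows; the vocabularies and feature sets here are small illustrative stand-ins for the D3M task keywords and the 16 profiler-detected semantic types:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def encode(items, vocabulary):
    # One-hot / bag-of-words encoding over a fixed vocabulary.
    return [1.0 if term in items else 0.0 for term in vocabulary]

# Illustrative vocabularies (assumptions, not the real D3M lists).
KEYWORDS = ["classification", "tabular", "timeSeries", "collaborativeFiltering"]
SEMANTIC_TYPES = ["categorical", "date-time", "text", "missing-values"]

query_task = encode({"classification", "tabular"}, KEYWORDS)
stored_task = encode({"classification", "tabular"}, KEYWORDS)
query_data = encode({"categorical", "missing-values"}, SEMANTIC_TYPES)
stored_data = encode({"categorical", "text"}, SEMANTIC_TYPES)

task_match = cosine(query_task, stored_task)  # identical keyword sets
data_sim = cosine(query_data, stored_data)    # one shared semantic type
```

Because the keyword tags are few and distinctive, an exact bag-of-words match suffices for the task step, while the cosine score over semantic-type vectors gives a graded notion of dataset similarity.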
To select these pipelines, AlphaD3M takes into consideration pipeline performance across different datasets. Some datasets are more challenging than others, so the performance of a pipeline can vary widely from dataset to dataset. To compare pipeline performance properly, AlphaD3M uses a strategy based on the average distance to minimum (ADTM) (Wistuba et al., 2015), which transforms the performance into the distance to the best-observed performance, scaled between 0 and 1. In contrast to ADTM, which uses the misclassification rate, AlphaD3M uses the actual performance (the score) of the pipelines and thus applies the average distance to maximum instead to select the best pipelines. It then transforms the primitives within the pipelines to their classes. For instance, the primitive imputer.SKlearn belongs to the class IMPUTATION. If there is a pipeline with the structure [imputer.SKlearn svm.SKlearn], it is converted to the pattern [IMPUTATION CLASSIFICATION]. Unlike Feurer et al. (2021), who create a unique portfolio of pipelines in an offline phase, AlphaD3M creates the portfolio online, based on the query task and dataset. Also, the output is a portfolio of patterns, not of static pipelines, which allows more flexibility in constructing pipelines. These patterns are used as production rules of the grammar. Algorithm 1 in the Appendix describes the process of building the grammar.

C. Prioritization of Primitives. When data scientists build an ML pipeline, they start this process using primitives that are known to perform well. For example, XGBoost or Random Forests are good initial candidates for classification tasks. AlphaD3M follows this intuition to identify good candidate primitives for a specific task, using the data from Marvin. This prior knowledge about promising primitives can be helpful to find better pipelines faster.

Similar to Ono et al. (2021), AlphaD3M uses the Pearson correlation (PC) to estimate how much a primitive contributes to the score of a pipeline.
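A minimal, self-contained sketch of the machinery in this subsection: converting pipelines to primitive-class patterns, scoring pipelines by average distance to maximum, correlating a primitive's presence with pipeline scores, and plugging the resulting importance into the MCTS bonus of Eq. (1) below. The primitive names, class mapping, scores, and constants are invented for illustration:

```python
import math

# Map concrete primitives to their classes (illustrative subset).
PRIMITIVE_CLASS = {
    "imputer.SKlearn": "IMPUTATION",
    "svm.SKlearn": "CLASSIFICATION",
}

def to_pattern(pipeline):
    """Convert a pipeline (list of primitive names) into a class pattern."""
    return [PRIMITIVE_CLASS[p] for p in pipeline]

def avg_distance_to_maximum(scores_by_dataset, pipeline):
    """Average, over datasets, of the pipeline's distance to the best
    observed score, scaled to [0, 1]; lower is better."""
    distances = []
    for scores in scores_by_dataset.values():
        best, worst = max(scores.values()), min(scores.values())
        span = (best - worst) or 1.0
        distances.append((best - scores[pipeline]) / span)
    return sum(distances) / len(distances)

def pearson(x, y):
    """Pearson correlation; on a 0/1 vector it equals the point-biserial
    correlation coefficient mentioned in the text."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# p[i] = 1 if pipeline i contains the primitive; s[i] = pipeline i's score.
p = [1, 1, 0, 0]
s = [0.9, 0.8, 0.4, 0.3]
importance = pearson(p, s)  # near 1: the primitive co-occurs with high scores

def uct(Q, P, R, N_s, N_sa, c=1.0, alpha=0.5):
    """Eq. (1): the exploration term mixes the network prior P(s, a) with the
    importance term R(a) = G(a) * L(a) (global times local importance)."""
    return Q + c * (alpha * P + (1 - alpha) * R) * math.sqrt(N_s) / (1 + N_sa)

scores = {
    "dataset_A": {"p1": 0.9, "p2": 0.5},
    "dataset_B": {"p1": 0.7, "p2": 0.8},
}
adtm_p1 = avg_distance_to_maximum(scores, "p1")  # best on A, worst on B
u = uct(Q=0.6, P=0.3, R=importance, N_s=100, N_sa=9)
```

A primitive with high importance inflates the exploration bonus of the actions that select it, so the search visits those branches earlier, which is exactly the prioritization effect described in the text.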
However, instead of using the raw scores, it uses the ADTM values, because they are scaled across different datasets. AlphaD3M estimates the primitive importance using the PC between the primitive indicator vector p (p_i = 1 if pipeline i contains the primitive in question and p_i = 0 otherwise) and the pipeline score vector s, where s_i is the score of pipeline i. Since p and s are dichotomous and quantitative variables, respectively, the point-biserial correlation coefficient (PBC) (Sheskin, 2003) is an appropriate correlation measure; it is mathematically equivalent to the PC but can be calculated with fewer operations. The correlation values are normalized between 0 and 1 (using min-max normalization).

AlphaD3M calculates these correlations for the primitives at two levels: (a) global, when it considers all the pipelines, and (b) local, when it considers only the pipelines for each pattern. The main goal is to estimate how important a primitive is for all the pipelines and for each pattern. Primitives with higher importance values should have priority during the search for pipelines. Algorithm 2 describes the process of calculating the primitive importance values in detail (see the Appendix). To prioritize the usage of promising primitives, AlphaD3M includes these importance values in the MCTS formula:

U(s, a) = Q(s, a) + c (α P(s, a) + (1 − α) R(a)) √N(s) / (1 + N(s, a))    (1)

where Q(s, a) is the expected reward for action a (the selection of primitive a) from state s, N(s, a) is the number of times action a was taken from state s, and N(s) is the number of times state s was visited. P(s, a) are the probabilities predicted by the neural network over actions a from a state s, c is a constant that determines the amount of exploration, R(a) = G(a) · L(a), where G(a) and L(a) are the global and local importance of action a, and α is a coefficient that keeps the trade-off between R(a) and P(s, a).

D. Decoupled Hyperparameter Tuning.
Hyperparameter tuning is an essential part of fitting machine learning models (Bergstra et al., 2011; Snoek et al., 2015; Dolatnia et al., 2016). This is also the case for end-to-end ML pipelines that target different tasks, where all primitives contain hyperparameters, not just the estimators.

AlphaD3M performs hyperparameter tuning as an independent task, after the pipelines are constructed. It uses Bayesian optimization, which is the state of the art for hyperparameter tuning (Bergstra and Bengio, 2012; Snoek et al., 2015; Dolatnia et al., 2016) and has been shown to outperform manual setting of parameters, grid search, and random search (Bergstra and Bengio, 2012; Turner et al., 2021).

Figure 2: (a) A code snippet to solve a semi-supervised classification task. (b) AlphaD3M allows users to inspect the contents of the input dataset, including column statistics and data types. (c) Analyzing ML pipelines through the integration with PipelineProfiler.

Tuning Top-k Pipelines. AlphaD3M synthesizes and evaluates the pipelines using primitives with default values for their hyperparameters. The pipelines are then ranked by performance, and the top-k pipelines are selected for tuning. AlphaD3M uses Sequential Model-Based Algorithm Configuration (SMAC) (Lindauer et al., 2022), a Python library for Bayesian optimization. It approximates a probability model of the performance outcome given a parameter configuration, updated from a history of executions. AlphaD3M selects the Gaussian Process models from SMAC to minimize an arbitrary acquisition function, using the Expected Improvement criterion to choose the parameter values for each iteration until a condition (a number of iterations) is met. The acquisition function is designed to normalize the performance metric used to synthesize the pipelines between zero and one; as the pipeline execution evaluations increase, the acquisition function gets closer to zero. SMAC requires a set of unique parameters to assign values during its tuning procedure.
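The decoupled, top-k tuning scheme can be sketched as below. To keep the example self-contained, plain random sampling stands in for SMAC's Bayesian optimization, and the pipelines, search space, and evaluation function are invented for illustration:

```python
import random

def tune_top_k(pipelines, scores, search_space, evaluate, k=3, iterations=20, seed=0):
    """Rank pipelines found with default hyperparameters, then tune only the
    top-k. `evaluate(pipeline, config)` returns a score; random sampling is a
    stand-in here for SMAC's model-based configuration search."""
    rng = random.Random(seed)
    ranked = sorted(pipelines, key=lambda p: scores[p], reverse=True)[:k]
    best = {}
    for pipeline in ranked:
        best_cfg, best_score = None, scores[pipeline]  # default config baseline
        for _ in range(iterations):
            cfg = {name: rng.choice(values) for name, values in search_space.items()}
            s = evaluate(pipeline, cfg)
            if s > best_score:
                best_cfg, best_score = cfg, s
        best[pipeline] = (best_cfg, best_score)
    return best

# Toy example: the "evaluation" rewards a larger (hypothetical) n_estimators.
space = {"n_estimators": [10, 100, 500], "max_depth": [3, 10]}
defaults = {"p1": 0.80, "p2": 0.75, "p3": 0.60}
result = tune_top_k(
    ["p1", "p2", "p3"], defaults, space,
    evaluate=lambda p, cfg: defaults[p] + 0.0001 * cfg["n_estimators"],
    k=2,
)
```

Tuning only the top-k pipelines keeps the cost of this phase bounded, at the risk (noted in the ablation study in Section 4) of missing pipelines that would overtake the leaders after tuning.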
Since AlphaD3M considers multiple primitives with identical names, it constructs an internal hierarchical nomenclature of parameters and designs their dependencies using ConfigSpace.

3.3 The API

We have developed a Python-based API that supports the process of building and exploring ML pipelines within a Jupyter Notebook environment. The API is integrated with the D3M AutoML systems and supports various dataset formats, such as raw CSV, D3M, and OpenML. Model synthesis can be done with a few lines of code, as shown in Figure 2(a). The API allows users to (a) define a problem, (b) explore summaries of their input dataset, (c) summarize the produced pipelines, and (d) analyze and compare pipelines with respect to their performance scores and prediction outputs. We describe the main components of the API below.

Problem Definition. To build a predictive model, AlphaD3M needs a problem specification that describes a prediction problem, specifically: (a) the training dataset; (b) a target variable, i.e., what should be predicted by the predictive model; (c) the maximum running time, which controls how long the search can take (to control the use of computational resources); (d) the desired performance metric; and (e) a list of task keywords that specify the kind of prediction task and, therefore, the techniques that should be used to solve the prediction problem. Figure 2(a) shows an example of how to define a problem in AlphaD3M.

Table 2: Comparison of MT-AutoML systems with respect to the number of supported task types, winner pipelines, and average rank by each system.
 AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al. (2019)
Unique ML tasks supported 17 16 15 17 15 16 14 2
Winner pipelines 49 39 30 21 20 11 10 7
Average rank 2.85 2.89 2.90 3.99 4.68 5.32 5.73 6.85

Data Exploration. To build good predictive models, it is important to identify data attributes that lead to accurate predictions. The API provides multiple tools for data exploration.
For example, it shows different visualizations (compact, detail, and column views) that summarize the content of tabular datasets (see Figure 2(b)).

Pipeline Summary. After the pipeline search is complete, users can display a leaderboard, train individual pipelines with the complete data, perform predictions, and evaluate them against a held-out dataset.

Pipeline Exploration. Users can analyze the produced pipelines using the PipelineProfiler (Ono et al., 2021), which is fully integrated into AlphaD3M, as shown in Figure 2(c). PipelineProfiler is a visual analytics tool that enables users to compare and explore the pipelines generated by AutoML systems.

Pipeline Refinement and Deployment. AlphaD3M allows users to save and load pipelines, enabling them to reload pipelines later and perform analyses without having to re-run the AutoML search. They can load the saved pipelines at any time for training or testing purposes. In addition, users can export pipelines to Python code. This gives them more control and the ability to modify (and customize) the automatically generated pipelines (e.g., change hyperparameters or replace a classifier primitive). More information about the API can be found on the documentation web page: https://alphad3m.readthedocs.io/en/latest/api.html

4 Evaluation

To demonstrate the effectiveness of AlphaD3M and its ability to handle a rich set of ML tasks, we compared AlphaD3M with state-of-the-art AutoML systems using two dataset collections. We also present use cases to show how useful, flexible, and easy to use AlphaD3M is.

4.1 Comparing AutoML Systems

D3M Datasets. This collection contains challenging datasets and covers a wide variety of tasks (a total of 17 task types) and data types (see Table 3). We evaluated all the systems using train and test splits. In most cases, the sizes are 0.8 and 0.2 for the train and test splits, respectively (see the dataset's repository [2] for details).
For each dataset, we ran the systems over the train split for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021). After that, we evaluated the best pipeline produced by each system on the test split. For this experiment, we used 1 GPU (GeForce GTX 1080 Ti), 14 CPU cores (Intel Xeon E5-2695 v4, 2.10 GHz), and 56 GB of memory.

Table 2 shows the number of supported task types (ML tasks), winner pipelines (i.e., pipelines with the best performance for a given dataset), and the average rank of each AutoML system (the rank of each system among the 8 AutoML systems applied to each dataset). If two or more systems produce pipelines that tie on the best score, all of them are considered winner pipelines. As we can see, AlphaD3M and Aika were able to solve 17 out of 17 unique tasks, obtaining the best coverage. We also evaluated the effectiveness of AlphaD3M. It had the best overall performance, producing the best pipeline for 49 datasets with the best average rank (2.85). Analyzing the support for each

[2] https://datasets.datadrivendiscovery.org/d3m/datasets

Table 3: Number of datasets by task type and number of solved datasets by each AutoML system for all task types covered by the D3M datasets.
ML Task AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori et al.
(2019)
Tabular Classification (20) 20 19 18 20 18 17 13 20
Tabular Regression (11) 11 11 11 8 9 6 5 9
Image Classification (9) 9 8 9 9 7 7 2 0
Image Regression (1) 1 1 1 1 1 1 1 0
Text Classification (9) 9 9 9 9 8 8 9 0
Audio Classification (2) 2 2 2 2 1 2 2 0
Graph Matching (3) 3 3 3 3 2 2 2 0
Time series Forecasting (13) 13 13 13 13 2 12 10 0
Link Prediction (3) 3 3 3 3 2 2 2 0
Collaborative Filtering (1) 1 0 1 1 0 1 0 0
Time series Classification (19) 19 19 19 17 19 15 19 0
Community Detection (3) 3 3 0 2 2 1 0 0
Video Classification (2) 2 2 2 2 0 2 2 0
Vertex Classification (4) 4 4 4 4 4 4 4 0
Object Detection (2) 2 2 0 1 1 0 0 0
Semisupervised Classification (6) 6 6 6 3 6 4 3 0
LUPI (4) 4 4 4 4 4 4 4 0

task type individually in Table 3, we can see that AlphaD3M was able to produce valid pipelines for all the datasets and solved more datasets than the other systems. Even though AlphaD3M is inspired by Drori et al. (2019), Table 2 and Table 3 clearly show the difference between them: AlphaD3M handles a larger number of tasks and produces many more winner pipelines. This shows that the different components of AlphaD3M are effective at handling the larger search spaces required by MT-AutoML systems. The detailed scores obtained by each system on all the D3M datasets and the average ranks by task can be found in Table 4 and Table 5 (Appendix).

Additionally, we calculated the number of winner pipelines for the top-3 systems only on the datasets where all of them produced pipelines. The AlphaD3M, Ensemble, and AutonML systems obtained 48, 42, and 38, respectively.
These results confirm that the superior performance of AlphaD3M is not solely due to its support for a broader range of ML tasks.

We performed an ablation study to analyze the contribution of each component of AlphaD3M on a random sample of five D3M datasets for classification tasks2 (the datasets for which AlphaD3M obtained its best, average, and worst performances). Figure 3 shows the best scores reached for each dataset by the full AlphaD3M and by versions with some components removed (or replaced). As we can see, using all components leads to the best results.

Figure 3: Ablation study for the different components of AlphaD3M.

To evaluate the importance of the automatic grammar, we replaced it with the manually-designed grammar used in Drori et al. (2019). For the POKER, SPECTRO, WORDS, and SICK datasets, when the manual grammar was used, AlphaD3M was not able to produce valid pipelines, which highlights the importance of automatically generating the grammar. These datasets contain multiple types of features, like text, DateTime, etc., which were not covered by the manually-constructed grammar. The prioritization of primitives also plays an important role in AlphaD3M: when this feature was not used, the performance decreased, e.g., on the POKER, SPECTRO, and LIBRAS datasets. As we can see in Figure 3, on most of the datasets, when we removed the hyperparameter tuning component, AlphaD3M obtained the same results. This suggests that the heuristic used by AlphaD3M (tuning only the top-k pipelines) may miss good pipelines that would attain better performance after tuning. In future work, we plan to investigate alternative strategies for hyperparameter tuning that attain a better balance of computational cost and pipeline performance.

OpenML Benchmark. Similar to Erickson et al. (2020), we compared our system with AutoWEKA, TPOT, H2O, AutoGluon, and Auto-Sklearn 2.0 (hereinafter referred to as Auto-Sklearn) on the 39 OpenML datasets (Gijsbers et al., 2019). This corpus contains a variety of datasets intended to represent real-world data science problems and covers binary and multiclass classification tasks. We used AMLB (Gijsbers et al., 2022) to compare the systems, running them locally for one hour using a single fold split and accuracy as the optimization metric. For this experiment, we used 4 CPU cores (Intel Xeon Platinum 8268, 2.9 GHz) and 32 GB of memory.

Figure 4: Performance of AutoML systems on the OpenML Benchmark. The x-axis shows the accuracy values (normalized by the best score), and the y-axis shows the IDs of the OpenML tasks.

Figure 4 shows the scores (normalized by the best score) of all the systems (the detailed scores can be found in Tables 6 and 7 in the Appendix). As we can see, AlphaD3M produced pipelines whose performance is on par with the other AutoML systems. We also calculated the average rank of all the systems over the 39 datasets. AlphaD3M obtained an average rank of 3.64, while Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA obtained 2.08, 2.33, 3.08, 3.72, and 5.10, respectively. To better understand these numbers, we also estimated the performance gain of the pipelines found by AlphaD3M over the pipelines generated by the other systems. The average gain of AlphaD3M on the OpenML datasets was +0.001, which shows that, in general, AlphaD3M attained good results for this collection. We analyzed the 3 datasets (task_146195, task_167119, and task_168331) for which AlphaD3M generated pipelines with lower performance than the other systems. This happened because these datasets are imbalanced and have multiple classes. The performance of AlphaD3M on them could be improved by including primitives that handle imbalanced datasets. This underscores the importance of being able to add primitives to AutoML systems.

Concerning coverage, it is important to highlight that AlphaD3M succeeded on 38 datasets. Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA solved 39, 39, 34, 29, and 28 datasets, respectively.
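The normalization by the best score and the per-dataset gain used in this comparison can be sketched as follows; the numbers are toy values (not the paper's results), and NaN marks a system that produced no pipeline for a dataset:

```python
import numpy as np

# Toy accuracies: rows = datasets, columns = systems; the last column
# plays the role of the system being evaluated. NaN = no pipeline produced.
scores = np.array([
    [0.90, 0.95, 0.94],
    [0.70, np.nan, 0.77],
    [0.88, 0.86, 0.83],
])

# Normalize each dataset's scores by the best score on that dataset.
best = np.nanmax(scores, axis=1, keepdims=True)
normalized = scores / best

# Gain of the last system vs. the best of the other systems,
# averaged over datasets.
others_best = np.nanmax(scores[:, :-1], axis=1)
gain = scores[:, -1] - others_best
avg_gain = np.nanmean(gain)
```

A small positive average gain, as reported above for AlphaD3M, means the system beats the best competitor slightly more often (or by more) than it loses.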
As pointed out by Gijsbers et al. (2022), the results of Auto-Sklearn on the OpenML datasets must be considered very carefully, since there could be an overlap between the datasets used in its meta-learning process and the ones used in the evaluation. It is important to highlight that none of the OpenML datasets are included in the version of Marvin that was used by AlphaD3M in these experiments.

4.2 Use Cases

Pivoting across ML tasks. Predicting hostile actions against ships and mariners worldwide is important to prevent piracy and prosecute the aggressors. Consider an analyst from the U.S. National Geospatial-Intelligence Agency (NGA) who is building a model using the Anti-Shipping Activity Messages dataset (ASAM, 2021). She wants to identify which records mention guns and which do not. This is a non-trivial problem, since a variety of terms (e.g., pistol, rifle, etc.) can indicate that a gun is present. This dataset contains 8,000 documents, of which 1,400 were annotated. She started by using AlphaD3M to create models from the 1,400 labeled documents, setting the model search to 1 hour. AlphaD3M derived high-quality pipelines: the best pipeline had an F1 of 0.90. However, she wondered whether these pipelines could be further improved, in particular by leveraging the 6,600 unlabeled documents through semi-supervised learning. AlphaD3M supports a wide range of tasks, including semi-supervised learning; users just need to add the keyword "semiSupervised" as a parameter. She then ran a new experiment using the 1,400 labeled and 6,000 unlabeled instances as the training dataset. The results improved from 0.90 to 0.95 F1. These experiments show that, by using AlphaD3M, data scientists can improve their results, pivoting very quickly from one task (classification) to another (semi-supervised classification).

Reducing pipeline execution time through model exploration.
Using content analysis and predictive modeling for conflict assessment is a common approach for conflict analysts to guide policy-making decisions (D'Orazio, 2020). Consider a conflict analyst trying to categorize explosion events that involve terrorist activities. She uses the explosion events dataset (Raleigh et al., 2010), which contains 20,000 articles describing events that involve terrorist activities. An article is relevant if it describes attacks involving explosions. To create classification models, she ran AlphaD3M for 1 hour. The system synthesized high-quality pipelines, with F1 values around 0.9. To identify the most suitable pipeline, she used the PipelineProfiler to explore the derived models. She observed that the top-10 pipelines had similar scores but execution times above 800 seconds. To address this problem, she tried a different strategy: combining progressive sampling and active learning to reduce the training data from 20,000 to 3,200 documents. Then, she re-ran AlphaD3M using the smaller set as the training dataset, while keeping the rest of the workflow unchanged. The top F1 score improved from 0.91 to 0.96, and the execution time dropped from 800 to 125 seconds.

5 Conclusions

We introduced AlphaD3M, an MT-AutoML library that automatically synthesizes end-to-end pipelines for 17 ML tasks and 6 different data types. AlphaD3M introduces new methods to automatically derive grammars and prioritize primitives, which are essential for effectively managing the large space MT-AutoML systems must search. In addition, AlphaD3M embraces a user-in-the-loop approach through an API that allows users to explore the input data and the derived ML pipelines, as well as to customize the pipelines. We presented a detailed experimental evaluation that compares our approach to several state-of-the-art AutoML systems over different problems and datasets.
The results suggest that AlphaD3M is effective: not only does it solve a larger number of problem types, but it also derives pipelines whose performance is superior or on par with those derived by other systems.

Although AlphaD3M's approach is primitive-agnostic, so far it only relies on the D3M primitives to build ML pipelines. We plan to extend AlphaD3M by including additional state-of-the-art and more recent primitives, e.g., models published in the HuggingFace or PyTorch Hub repositories. Moreover, we would like to improve the system's interoperability with existing open-source primitives that use standard APIs, such as the well-known scikit-learn fit-predict API.

Acknowledgements. This work was partially supported by the DARPA D3M program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

ASAM (2021). ASAM: Anti-Shipping Activity Messages. https://msi.nga.mil/Piracy.

Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for Hyper-Parameter Optimization. In Proceedings of NIPS, pages 2546–2554.

Bergstra, J. and Bengio, Y. (2012). Random Search for Hyper-parameter Optimization. JMLR, pages 281–305.

Cashman, D., Humayoun, S. R., Heimerl, F., Park, K., Das, S., Thompson, J., Saket, B., Mosca, A., Stasko, J. T., Endert, A., Gleicher, M., and Chang, R. (2018). Visual Analytics for Automated Model Discovery. CoRR.

D3M (2022). D3M Website. https://datadrivendiscovery.org.

D3M Primitives (2022). D3M Primitives Website. https://gitlab.com/datadrivendiscovery/primitives/-/tree/master/primitives.

Datamart Profiler Library (2021). Datamart Profiler Website. https://pypi.org/project/datamart-profiler/.

Dolatnia, N., Fern, A., and Fern, X. (2016). Bayesian Optimization with Resource Constraints and Production. In Proceedings of ICAPS, pages 115–123.

D'Orazio, V. (2020). Conflict Forecasting and Prediction.
In Oxford Research Encyclopedia of International Studies. Oxford University Press.

Drori, I., Krishnamurthy, Y., Lourenco, R., Rampin, R., Cho, K., Silva, C., and Freire, J. (2019). Automatic Machine Learning by Pipeline Synthesis using Model-based Reinforcement Learning and a Grammar. In 6th ICML Workshop on Automated Machine Learning.

Elliott, J. (2020). DARPA Data-Driven Discovery of Models (D3M) Program. https://www.darpa.mil/program/data-driven-discovery-of-models.

Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv preprint arXiv:2003.06505.

Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2021). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and Robust Automated Machine Learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.

Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML Benchmark.

Gijsbers, P., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2019). An Open Source AutoML Benchmark. In 6th ICML Workshop on Automated Machine Learning.

Gil, Y., Honaker, J., Gupta, S., Ma, Y., D'Orazio, V., Garijo, D., Gadewar, S., Yang, Q., and Jahanshad, N. (2019). Towards Human-guided Machine Learning. In Proceedings of the Conference on Intelligent User Interfaces (IUI), pages 614–624. ACM.

Google Cloud AutoML (2020). Google Cloud AutoML Website. https://cloud.google.com/automl.

Grafberger, S., Guha, S., Stoyanovich, J., and Schelter, S. (2021). MLINSPECT: a Data Distribution Debugger for Machine Learning Pipelines. age, 20:123.

Habibi, M., Starlinger, J., and Leser, U. (2020).
Tabsim: A Siamese Neural Network for Accurate Estimation of Table Similarity. In 2020 IEEE International Conference on Big Data (Big Data), pages 930–937. IEEE.

He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778.

Hutter, F., Kotthoff, L., and Vanschoren, J. (2019). Automated Machine Learning: Methods, Systems, Challenges. Springer.

Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., and Leyton-Brown, K. (2017). Auto-WEKA 2.0: Automatic Model Selection and Hyperparameter Optimization in WEKA. The Journal of Machine Learning Research, 18(1).

LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable Automatic Machine Learning. 7th ICML Workshop on Automated Machine Learning (AutoML).

Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T., Sass, R., and Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research, 23(54):1–9.

Marvin (2020). Marvin Website. https://datadrivendiscovery.org/marvin.

Olson, R. S. and Moore, J. H. (2016). TPOT: A Tree-based Pipeline Optimization Tool for Automating Machine Learning. In ICML AutoML Workshop, pages 66–74.

Ono, J. P., Castelo, S., López, R., Bertini, E., Freire, J., and Silva, C. T. (2021). PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines. IEEE Transactions on Visualization and Computer Graphics, 27:390–400.

Raleigh, C., Linke, A., Hegre, H., and Karlsen, J. (2010). Introducing ACLED: An Armed Conflict Location and Event Dataset: Special Data Feature. Journal of Peace Research, 47(5):651–660.

Santos, A., Castelo, S., Felix, C., Ono, J. P., Yu, B., Hong, S. R., Silva, C. T., Bertini, E., and Freire, J. (2019). Visus: An Interactive System for Automatic Machine Learning Model Building and Curation.
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA), pages 1–7. Association for Computing Machinery.

Sheskin, D. J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press.

Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. Conference on Neural Information Processing Systems.

Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M. M. A., Prabhat, P., and Adams, R. P. (2015). Scalable Bayesian Optimization Using Deep Neural Networks. In Proceedings of the ICML, pages 2171–2180.

Trabelsi, M., Chen, Z., Zhang, S., Davison, B. D., and Heflin, J. (2022). StruBERT: Structure-aware BERT for Table Search and Matching. arXiv preprint arXiv:2203.14278.

Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., and Guyon, I. (2021). Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020. CoRR, abs/2104.10201.

Wilson, G. T. (2016). Time Series Analysis: Forecasting and Control, 5th Edition. Journal of Time Series Analysis, 37(5):709–711.

Wistuba, M., Schilling, N., and Schmidt-Thieme, L. (2015). Learning Hyperparameter Optimization Initializations. In 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 1–10. IEEE.

A Broader Impact Statement

AlphaD3M can potentially strengthen the efforts to democratize data science by broadening the application of automated predictive pipelines. Subject experts can create their own pipelines and explore them in the context of an ethical framework.
Its interoperable software infrastructure enables external auditing and improves the trust and interpretability of synthesized pipelines. The search-space management mechanism also allows efficient resource allocation and helps to prototype pipelines before performing high energy-consuming model training.

B Submission Checklist

1. For all authors...

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See it mainly in Sections 3 and 4.

(b) Did you describe the limitations of your work? [Yes] See Section 5. We also discuss the infeasibility of AutoML systems in general, and our efforts to mitigate limitations.

(c) Did you discuss any potential negative societal impacts of your work? [No] However, we advocate for the necessity of a human-in-the-loop to build trust in the generated pipelines.

(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] Our paper follows these guidelines.

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [N/A] We are not including theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] We are not including theoretical results.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation and execution commands (either in the supplemental material or as a URL)? [Yes] We provide a link to our public GitLab repository and documentation webpage, where users can find information about the installation and instructions to run our system.
The reported evaluation was conducted by a third (independent) party in a competition among AutoML systems, so we cannot release that code.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] See the scripts/paper_automlconference folder in our repository.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] See the scripts/paper_automlconference folder in our repository.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] Our code is well documented and follows coding standards and best practices. We provide different Jupyter notebook examples and an API to show how to use AlphaD3M.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [No] We do not specify all the details. However, some details, like the data splits and search spaces, are publicly available in the references.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training, and hyperparameters for that code? [Yes] See Section 4.1.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 4.1.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] We presented two comparisons (see Section 4). For the first comparison, we used the same protocol. For the second one, we used an existing asset and evaluated our system using the same time protocol.

(i) Did you compare performance over time?
[No] We ran the systems for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021), and reported the best score during this time.

(j) Did you perform multiple runs of your experiments and report random seeds? [N/A] We did not perform multiple runs of our experiments.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not report error bars.

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A] We did not use surrogate benchmarks.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] Some of the reported evaluations were conducted by a third party.

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also the hyperparameters of your own method)? [N/A] The hyperparameters were automatically tuned by our AutoML engine.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 4.1.

(b) Did you mention the license of the assets? [No] However, all assets are publicly available and the licenses can be retrieved from the references.

(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] We included a URL to the data used in the experiments.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The assets used in this paper are publicly available.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data used contain neither personally identifiable information nor offensive content.

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not carry out a user study.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not carry out a user study.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not carry out a user study.

C Additional Details

C.1 Algorithms

Algorithm 1 describes the process of building the grammar. getVectorTK and getVectorST represent the BOW and one-hot encoding functions, respectively. The best empirically-determined values for the thresholds tsim and tperf are 0.8 and 0.5, respectively.

Algorithm 1: Grammar Builder
Input: Marvin datasets D, query dataset q, thresholds tsim and tperf
Initialize S = []  // Similar datasets
for di in D do
    simTK = cosineSimilarity(getVectorTK(di), getVectorTK(q))
    if simTK > tsim then
        simST = cosineSimilarity(getVectorST(di), getVectorST(q))
        if simST > tsim then
            Add di to S
Initialize P = calculateADTM(S)
Initialize R = []  // Production rules
for pi in P do
    if performance(pi) > tperf then
        ri = convertToPattern(pi)
        Add ri to R
return R

Algorithm 2 describes the process of calculating the primitive importance values in detail. For instance, the importance values calculated for XGBoost and Random Forest are 0.62 and 0.56, whereas for Nearest Centroid and K-Nearest Neighbors the values are 0.46 and 0.44. This shows that the importance values can be used as an indicator to prioritize the usage of primitives.

Algorithm 2: Primitives Importance
Input: Pipelines P, Patterns T
Initialize R = getPrimitives(P)
Initialize G, L = []  // Global and local correlations
for ri in R do
    pc = PearsonCorrelation(ri, P)
    npc = normalize(pc)
    Add npc to G
for ti in T do
    pi = getPipelines(ti, P)
    R = getPrimitives(ti, pi)
    for ri in R do
        pc = PearsonCorrelation(ri, R)
        npc = normalize(pc)
        Add npc to L
return (G, L)

C.2 Grammars

Different tasks require different grammars.
For instance, the algorithms needed to solve time-series and semi-supervised classification problems have a different structure and use a different set of primitives. Consequently, specialized grammars and production rules are needed for each task. Manually creating these grammars is time-consuming and error-prone, and relying on these grammars can limit the effectiveness of the AutoML systems with respect to problem coverage and the quality of the derived pipelines.

Figure 5 shows an excerpt of a grammar automatically generated in AlphaD3M to solve classification problems. The start symbol (S) is the starting point from which all the production rules can be derived. In the grammar, the terminal 'primitive' can be any of the available algorithms in AlphaD3M, and 'E' represents the empty symbol.

S ::= CATEGORICAL_ENCODER TEXT_FEATURIZER DATA_CONVERSION IMPUTATION CLASSIFICATION
S ::= TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER DIMENSIONALITY_REDUCTION CLASSIFICATION
S ::= DATA_STRUCTURE_ALIGNMENT IMPUTATION CLASSIFICATION
S ::= IMPUTATION FEATURE_SCALING CLASSIFICATION
S ::= IMPUTATION FEATURE_SELECTION CLASSIFICATION
S ::= IMPUTATION DIMENSIONALITY_REDUCTION CLASSIFICATION
IMPUTATION ::= 'primitive' | 'E'
CATEGORICAL_ENCODER ::= 'primitive' | 'E'
FEATURE_SCALING ::= 'primitive' | 'E'
FEATURE_SELECTION ::= 'primitive' | 'E'
DIMENSIONALITY_REDUCTION ::= 'primitive' | 'E'
DATA_CONVERSION ::= 'primitive'
TEXT_FEATURIZER ::= 'primitive'
DATA_STRUCTURE_ALIGNMENT ::= 'primitive'
CLASSIFICATION ::= 'primitive'

Figure 5: Excerpt of a grammar automatically generated by AlphaD3M for classification tasks.

In Figure 6, you can see the manual grammar used in the experiments. This grammar was proposed by Drori et al. (2019).
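As a toy illustration of how such production rules define a pipeline search space (this is a sketch, not AlphaD3M's actual grammar engine), the following expands a small invented subset of rules like those in Figure 5 into concrete pipeline skeletons:

```python
from itertools import product

# Toy subset of a classification grammar (illustrative only). Each S-rule
# is a sequence of pipeline steps; steps that can derive the empty symbol
# 'E' are optional.
S_RULES = [
    ["IMPUTATION", "FEATURE_SCALING", "CLASSIFICATION"],
    ["IMPUTATION", "FEATURE_SELECTION", "CLASSIFICATION"],
]
OPTIONAL = {"IMPUTATION", "FEATURE_SCALING", "FEATURE_SELECTION"}

def expand(rule):
    """Yield every pipeline skeleton derivable from one production rule."""
    # For each step, the choices are: keep it, or (if optional) drop it.
    choices = [([step], []) if step in OPTIONAL else ([step],) for step in rule]
    for picked in product(*choices):
        yield tuple(s for group in picked for s in group)

skeletons = {skel for rule in S_RULES for skel in expand(rule)}
```

Each skeleton would then be instantiated by replacing every step with a concrete 'primitive', which is where the primitive-importance values come in: they prioritize which algorithm to try first for each step.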
To generate this grammar for classification and regression tabular tasks, a developer was asked to manually review the primitives and group them into categories. For instance, the primitives decision_tree.SKlearn and random_forest.SKlearn were grouped into the category 'CLASSIFICATION'. Then, using his ML knowledge, he created the production rules of the grammar from these categories.

S ::= CLASSIFICATION_TASK | REGRESSION_TASK
CLASSIFICATION_TASK ::= CLASSIFICATION | DATA_CLEANING CLASSIFICATION | DATA_TRANSFORMATION CLASSIFICATION | DATA_CLEANING DATA_TRANSFORMATION CLASSIFICATION
REGRESSION_TASK ::= REGRESSION | DATA_CLEANING REGRESSION | DATA_TRANSFORMATION REGRESSION | DATA_CLEANING DATA_TRANSFORMATION REGRESSION
CLASSIFICATION ::= 'primitive'
REGRESSION ::= 'primitive'
DATA_CLEANING ::= 'primitive' DATA_CLEANING | 'E'
DATA_TRANSFORMATION ::= 'primitive' DATA_TRANSFORMATION | 'E'

Figure 6: Manual Grammar

C.3 Experiments

In Table 4, we can see the scores obtained by all the AutoML systems developed in the D3M program, including a majority-voting ensemble system, on a collection of 112 datasets2. This collection
This collection17contains challenging datasets that go beyond the simple tabular data and cover a wide variety oftasks and data types.Table 4: Scores obtained by AlphaD3M and the other AutoML systems developed in the D3M program.Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori124_120_mnist_8747 0.98 0.94 0.46 0.18 0.94 0.11 - -124_138_cifar100_1858 0.67 0.48 0.42 0.12 0.48 0.01 - -124_16_fashion_mnist 0.90 0.83 0.84 0.12 0.85 0.10 - -124_174_cifar10_MIN 0.88 0.82 0.84 0.27 0.80 0.10 - -124_188_usps_MIN 0.96 0.95 0.94 0.26 0.92 0.18 0.11 -124_214_coil20_MIN 0.99 0.99 0.99 0.85 0.97 - - -124_95_uc_merced_land_use_MIN 0.90 - 0.72 0.52 - 0.05 0.33 -1491_one_hundred_plants_margin_MIN 0.80 0.79 0.88 0.92 0.75 0.83 0.81 0.831567_poker_hand_MIN 0.90 0.84 0.28 0.48 0.12 0.13 - 0.27185_baseball_MIN 0.66 0.70 0.65 0.68 0.68 0.67 0.66 0.64196_autoMpg_MIN 6.57 9.12 5.74 11.95 7.49 6.01 15.36 7.0322_handgeometry_MIN 0.24 0.23 0.23 0.14 0.80 0.36 0.36 -26_radon_seed_MIN 0.02 0.02 0.24 0.03 0.02 0.06 1.40 0.0227_wordLevels_MIN 0.32 0.28 0.28 0.32 0.29 0.27 0.26 0.27299_libras_move_MIN 0.98 - - 0.48 - - 0.98 0.9730_personae_MIN 0.62 0.65 0.65 0.62 0.61 0.55 0.61 -313_spectrometer_MIN 0.43 0.37 0.37 0.30 0.32 0.33 0.23 0.4031_urbansound_MIN 0.93 0.93 0.91 0.75 0.92 0.77 0.49 -32_fma_MIN 0.55 0.57 0.34 0.28 - 0.11 0.11 -32_wikiqa_MIN 0.00 0.02 0.14 0.13 0.50 - 0.13 -38_sick_MIN 1.00 1.00 - 1.00 - - 0.49 1.004550_MiceProtein_MIN 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.0049_facebook_MIN 0.88 0.87 0.87 0.87 0.87 0.88 0.44 -534_cps_85_wages_MIN 20.11 20.35 22.07 23.15 24.86 21.44 - 20.7056_sunspots_MIN 34.55 11.82 8.64 8.45 58.30 9.40 90.60 -56_sunspots_monthly_MIN 64.61 41.18 46.86 41.04 - 62.20 27.74 -57_hypothyroid_MIN 0.96 0.98 0.99 0.98 0.74 0.99 0.97 0.9859_LP_karate_MIN 0.93 0.45 0.83 0.83 0.45 0.45 0.93 -59_umls_MIN 0.92 0.94 0.94 0.94 0.94 0.70 0.73 -60_jester_MIN 4.25 - 4.24 4.15 - 4.51 - -66_chlorineConcentration_MIN 0.82 0.86 0.81 0.52 0.78 0.68 0.23 
-6_70_com_amazon_MIN 0.85 0.85 - 0.85 0.85 - - -6_86_com_DBLP_MIN 0.72 0.72 - 0.72 0.72 - - -JIDO_SOHR_Articles_1061 0.98 0.94 0.94 0.81 0.56 0.60 0.64 -JIDO_SOHR_Tab_Articles_8569 1.00 0.99 1.00 1.00 0.56 1.00 1.00 -LL0_1100_popularkids_MIN 0.42 0.45 0.38 0.38 0.40 0.44 - 0.47LL0_186_braziltourism_MIN 0.14 0.35 0.36 0.17 0.24 0.20 0.34 0.16LL0_207_autoPrice_MIN 4.89·1065.76·1066.04·1063.76·1075.36·1065.43·1061.56·1085.81·106LL0_acled_reduced_MIN 0.83 0.88 0.89 0.84 0.91 0.85 0.74 0.91LL0_jido_reduced_MIN 0.90 0.89 0.91 0.90 0.90 0.90 - 0.90LL1_2734_CLIR 0.88 0.50 0.52 0.88 - - 0.50 -LL1_336_MS_Geolife_transport_MIN 0.60 1.00 0.99 - 0.85 - 0.98 -LL1_336_MS_Geolife_transport_separate 0.67 1.00 0.99 - 0.86 - 0.99 -LL1_3476_HMDB_actio_recognition_MIN 0.11 1.00 0.90 0.11 - 0.48 0.08 -LL1_50words_MIN 0.35 0.55 0.56 0.41 0.51 0.45 0.35 -LL1_726_TIDY_GPS_carpool 0.54 0.58 0.58 0.46 0.59 - 0.63 -LL1_736_population_spawn_MIN 1636.12 1806.40 1804.76 1644.26 - 2845.89 - -LL1_736_population_spawn_simpler_MIN 1346.10 1490.15 3669.54 1347.65 1323.72 1550.40 19887.20 -LL1_736_stock_market_MIN 7.64 1.49 8.69 1.75 - 30.66 - -LL1_ACLED_TOR_online_behavior_MIN 0.40 0.05 0.44 0.64 0.43 0.66 0.08 0.40LL1_Adiac_MIN 0.75 0.70 0.73 0.54 0.67 0.70 0.49 -LL1_ArrowHead_MIN 0.75 0.82 0.78 0.72 0.65 0.55 0.72 -LL1_CONFLICT_3457_atrocity 9.53 6.75 11.43 12.84 - 17.21 13.91 -LL1_Cricket_Y_MIN 0.52 0.54 0.59 0.52 0.62 0.53 0.45 -LL1_DIC28_net_MIN 0.84 0.80 0.80 0.80 0.80 0.84 - -LL1_ECG200_MIN 0.90 0.87 0.87 0.86 0.91 0.85 0.86 -LL1_EDGELIST_net_nomination_MIN 0.99 0.66 0.85 0.94 0.66 0.35 0.84 -LL1_ElectricDevices_MIN 0.54 0.42 0.46 0.06 0.44 0.27 0.31 -LL1_FISH_MIN 0.80 0.87 0.89 0.73 0.84 0.86 0.78 -LL1_FaceFour_MIN 0.84 0.83 0.71 0.55 0.65 0.40 0.66 -18(Table 4: Continued from the previous page)Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl DroriLL1_GS_process_classification_tab_MIN 0.80 0.80 0.80 0.80 0.80 0.73 - 0.81LL1_GS_process_classification_text_MIN 0.65 0.80 0.65 0.80 
0.80 0.76 0.80 -LL1_GT_actor_group_association_MIN 0.25 0.13 0.17 0.13 - - - -LL1_HandOutlines_MIN 0.89 0.91 0.90 0.88 0.88 0.88 0.88 -LL1_Haptics_MIN 0.43 0.42 0.44 0.42 0.41 0.45 0.42 -LL1_ItalyPowerDemand_MIN 0.93 0.95 0.95 0.95 0.95 0.91 0.90 -LL1_MIL_MUSK 0.68 0.77 0.83 0.67 0.80 0.80 - 0.72LL1_MIL_Mutagenesis 0.80 0.73 0.72 0.71 0.70 0.63 - 0.79LL1_MITLL_synthetic_vora_E_2538 0.29 0.53 0.52 0.50 0.31 0.44 - 0.38LL1_Meat_MIN 0.95 0.94 0.88 0.92 0.88 0.17 0.95 -LL1_OSULeaf_MIN 0.53 0.44 0.52 0.77 0.45 0.47 0.32 -LL1_PHEM_Monthly_Malnutrition_MIN 10.63 9.56 9.39 9.73 - 12.18 - -LL1_PHEM_weekly_malnutrition_MIN 3.34 4.32 3.45 2.94 - 4.23 4.18 -LL1_TXT_CLS_3746_newsgroup_MIN 0.60 0.46 0.55 0.48 0.60 0.45 0.23 -LL1_TXT_CLS_SST_Binary 0.73 0.82 0.82 0.55 - 0.51 0.53 -LL1_TXT_CLS_airline_opinion_MIN 0.81 0.80 0.81 0.80 0.81 0.72 0.72 -LL1_TXT_CLS_apple_products_sent_MIN 0.73 0.71 0.72 0.72 0.73 0.66 0.69 -LL1_VID_UCF11_MIN 0.99 0.99 0.25 0.27 - 0.02 0.08 -LL1_VTXC_1343_cora_MIN 0.61 0.04 0.22 0.17 0.04 0.13 0.52 -LL1_VTXC_1369_synthetic_MIN 0.95 0.22 0.33 0.21 0.22 0.19 0.48 -LL1_ViEWS_CM_S1 0.69 1.20 0.90 0.72 0.75 2.52 - 0.82LL1_ViEWS_PGM_S1 0.02 0.04 0.02 - 0.02 0.02 0.30 0.02LL1_bigearth_landuse_detection 0.90 0.96 0.76 0.65 0.21 - - -LL1_bn_fly_drosophila_medulla_net_MIN 0.24 0.24 - - - 0.19 - -LL1_h1b_visa_apps_7480 0.44 0.47 0.43 0.44 0.41 0.41 0.47 0.42LL1_net_nomination_seed_MIN 0.99 0.99 0.96 0.94 0.99 0.34 0.46 -LL1_penn_fudan_pedestrian_MIN 0.94 0.94 - 0.94 0.94 - - -LL1_retail_sales_total_MIN 1989.19 1921.54 1941.06 1966.30 1992.17 - 1971.76 2022.41LL1_terra_canopy_height_s4_100_MIN 113.04 68.44 39.02 52.21 - 79.86 343.27 -LL1_terra_canopy_height_s4_70_MIN 104.92 547.94 126.06 136.32 - 169.63 136.98 -LL1_terra_canopy_height_s4_80_MIN 112.95 92.95 32.57 74.59 - 111.49 74.54 -LL1_terra_canopy_height_s4_90_MIN 117.13 85.73 35.12 60.44 - 104.49 60.45 -LL1_terra_leaf_angle_mean_s4_MIN 0.04 0.09 0.05 0.04 - - 0.05 -LL1_tidy_terra_panicle_detection_MIN 0.01 
0.03 - - - - - -SEMI_1040_sylva_prior_MIN 0.93 0.90 0.93 - 0.92 - - -SEMI_1044_eye_movements_MIN 0.52 0.57 0.61 0.55 0.60 0.53 0.54 -SEMI_1053_jm1_MIN 0.26 1.00 0.16 - 0.16 0.41 - -SEMI_1217_click_prediction_small_MIN 0.04 0.03 0.04 - 0.17 - - -SEMI_1459_artificial_characters_MIN 0.68 0.99 0.83 0.99 0.67 0.61 0.52 -SEMI_155_pokerhand_MIN 0.58 0.66 0.60 0.05 0.64 0.50 0.51 -kaggle_music_hackathon_MIN 21.88 17.56 19.64 24.24 21.79 - - 21.85loan_status_MIN 0.40 0.50 0.51 0.44 0.33 - 0.48 0.46political_instability_MIN 0.81 0.89 0.89 0.89 0.89 - 0.88 -uu1_datasmash_MIN 1.00 1.00 1.00 1.00 0.61 1.00 1.00 -uu2_gp_hyperparameter_estimation_MIN 0.89 0.88 0.57 0.89 - - - 0.89uu3_world_development_indicators_MIN 2.39·10105.54·10124.12·1012-4.40·1012- - -uu3_world_development_indicators_raw 7.83·10131.04·10125.22·1011- - - - -uu4_SPECT_MIN 0.00 0.92 0.92 0.90 0.89 0.90 0.78 -uu5_heartstatlog_MIN 0.70 0.69 0.72 0.62 0.61 0.72 0.67 -uu6_hepatitis_MIN 0.00 0.47 0.89 0.40 0.27 0.31 0.44 -uu7_pima_diabetes_MIN 0.59 0.57 0.60 0.57 0.60 0.63 0.57 -uu_101_object_categories_MIN 0.95 0.89 0.84 0.34 - 0.10 - -19The average rank values obtained by different AutoML systems for each task type in the D3Mdatasets can be seen in Table 5. 
These datasets contain a total of 17 unique ML tasks.

Table 5: Average rank values by task obtained by different AutoML systems.
Task  AlphaD3M  AutonML  Ensemble  Aika  Distil  Autoflow  Axolotl  Drori
Image Classification 1.11 2.78 2.78 4.56 4.33 6.22 7.44 8.00
Tabular Classification 3.75 3.30 3.35 3.85 4.85 4.65 5.85 3.55
Tabular Regression 2.27 3.18 3.00 5.73 4.27 5.73 7.54 4.36
Image Regression 4.00 2.00 2.00 1.00 7.00 5.00 5.00 8.00
Text Classification 2.56 3.33 2.22 3.00 3.56 5.78 4.33 8.00
Audio Classification 1.50 1.00 3.50 5.00 5.50 5.00 6.00 8.00
Graph Matching 1.00 3.33 3.00 2.33 4.67 3.33 6.33 8.00
Time series Forecasting 3.38 3.62 2.62 2.23 7.31 5.08 5.08 8.00
Link Prediction 3.33 2.33 2.33 1.67 4.67 6.67 5.00 8.00
Collaborative Filtering 3.00 8.00 2.00 1.00 8.00 4.00 8.00 8.00
Time series Classification 3.26 2.26 2.16 4.68 3.79 5.32 4.53 8.00
Community Detection 1.00 1.00 8.00 3.33 3.33 6.33 8.00 8.00
Video Classification 2.50 1.00 3.00 3.50 8.00 4.50 5.50 8.00
Vertex Classification 1.00 4.00 3.25 4.25 4.00 6.50 3.50 8.00
Object Detection 1.50 1.00 8.00 4.50 4.50 8.00 8.00 8.00
Semisupervised Classification 3.50 2.33 2.33 6.00 2.83 6.00 6.83 8.00
LUPI 5.25 3.00 1.25 4.50 5.00 2.50 4.75 8.00

Table 6 and Table 7 show the raw and normalized scores (normalized by the best score) obtained by each system on the 39 datasets of the OpenML AutoML Benchmark (Gijsbers et al., 2019). This benchmark represents real-world data science problems and covers binary and multiclass classification tasks.
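The averages in Table 5 are rank aggregates of the per-dataset scores. A minimal R sketch of that computation is given below; it assumes a data frame `scores` with one row per dataset and one numeric column per system (higher score = better), and its handling of ties and failed runs (ranked last) is an assumption, since the paper does not specify these details.

```
# Average rank per AutoML system, computed row-wise over datasets.
# NA entries denote failed runs and are assigned the worst possible rank.
average_ranks <- function(scores) {
  ranks <- t(apply(scores, 1, function(row) {
    rank(-row, ties.method = "average", na.last = "keep")
  }))
  ranks[is.na(ranks)] <- ncol(scores)  # failed systems get the worst rank
  colMeans(ranks)
}
```

A lower average rank means a system more often beats its competitors across the datasets of a given task type.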
Additionally, Table 6 shows the gain of AlphaD3M regarding the other systems.

Table 6: Raw scores obtained by AlphaD3M and the other AutoML systems.
Dataset  AutoGluon  AutoWEKA  Auto-Sklearn  H2O  TPOT  AlphaD3M  Gain
task_10101 0.76 0.76 0.76 0.76 0.76 0.79 0.03
task_12 0.98 0.98 0.98 0.98 - 0.96 -0.01
task_146195 0.88 0.71 0.86 0.88 0.85 0.81 -0.03
task_146212 1.00 1.00 1.00 1.00 1.00 1.00 0.00
task_146606 0.74 0.60 0.73 0.72 - 0.73 0.03
task_146818 0.91 0.86 0.84 0.90 0.87 0.87 -0.01
task_146821 0.99 1.00 1.00 1.00 1.00 0.97 -0.03
task_146822 0.97 0.97 0.97 0.97 0.98 0.97 0.00
task_146825 0.91 - 0.91 0.90 - 0.86 -0.05
task_14965 0.91 0.88 0.91 0.91 0.91 0.91 0.00
task_167119 0.92 0.80 0.94 0.96 0.90 0.83 -0.08
task_167120 0.51 0.51 0.51 0.51 - 0.51 -0.00
task_168329 0.40 0.27 0.38 0.35 0.35 0.37 0.02
task_168330 0.73 0.65 0.73 0.73 0.70 0.72 0.01
task_168331 0.73 0.62 0.73 0.69 0.66 0.66 -0.02
task_168332 0.56 - 0.54 0.51 0.44 0.41 -0.10
task_168335 0.94 - 0.94 - 0.93 0.94 -0.00
task_168337 0.84 - 0.86 0.83 0.77 0.61 -0.21
task_168338 1.00 - 1.00 1.00 0.99 0.97 -0.03
task_168868 0.99 0.99 0.99 1.00 0.99 0.99 0.00
task_168908 0.74 0.73 0.76 0.72 - 0.77 0.03
task_168909 0.99 0.96 0.99 0.98 - 0.99 0.01
task_168910 0.72 0.60 0.72 0.72 0.71 0.65 -0.04
task_168911 0.81 0.82 0.82 0.82 0.81 0.81 -0.01
task_168912 0.93 0.92 0.95 0.95 0.95 0.94 -0.00
task_189354 0.67 - 0.67 0.61 0.67 0.65 -0.01
task_189355 0.94 - 0.00 - - 0.88 0.41
task_189356 0.71 - 0.69 - - - -
task_3 0.99 0.93 0.99 1.00 0.99 0.99 0.01
task_31 0.77 0.66 0.82 - 0.82 0.77 0.00
task_34539 0.95 - 0.95 0.95 0.95 0.95 -0.01
task_3917 0.87 - 0.86 - 0.88 0.86 -0.01
task_3945 0.98 - 0.98 0.98 0.98 0.98 0.00
task_53 0.86 0.67 0.85 0.88 - 0.82 0.01
task_7592 0.87 0.87 0.87 0.86 0.87 0.87 0.00
task_7593 0.97 0.66 0.96 0.80 - 0.95 0.10
task_9952 0.88 0.91 0.90 0.90 0.91 0.91 0.01
task_9977 0.98 0.95 0.97 0.98 0.97 0.96 -0.00
task_9981 0.94 0.86 0.96 0.94 0.96 0.94 0.01

Table 7: Normalized scores obtained by AlphaD3M and the other AutoML systems.
Dataset  AutoGluon  AutoWEKA  Auto-Sklearn  H2O  TPOT  AlphaD3M
task_10101 0.97 0.97 0.97 0.97 0.97 1.00
task_12 0.99 1.00 0.99 0.99 - 0.98
task_146195 1.00 0.81 0.98 1.00 0.97 0.92
task_146212 1.00 1.00 1.00 1.00 1.00 1.00
task_146606 1.00 0.82 1.00 0.98 - 0.99
task_146818 1.00 0.94 0.92 0.98 0.95 0.95
task_146821 0.99 1.00 1.00 1.00 1.00 0.97
task_146822 1.00 0.99 1.00 1.00 1.00 1.00
task_146825 1.00 - 0.99 0.99 - 0.94
task_14965 1.00 0.96 1.00 1.00 1.00 1.00
task_167119 0.96 0.83 0.98 1.00 0.94 0.86
task_167120 1.00 1.00 1.00 0.99 - 0.99
task_168329 1.00 0.69 0.96 0.88 0.89 0.94
task_168330 1.00 0.89 1.00 1.00 0.97 0.98
task_168331 1.00 0.84 1.00 0.95 0.90 0.91
task_168332 1.00 - 0.98 0.93 0.80 0.75
task_168335 1.00 - 1.00 - 0.99 0.99
task_168337 0.98 - 1.00 0.97 0.89 0.71
task_168338 1.00 - 1.00 1.00 0.99 0.97
task_168868 1.00 0.99 1.00 1.00 1.00 1.00
task_168908 0.97 0.96 0.99 0.94 - 1.00
task_168909 1.00 0.97 1.00 0.99 - 1.00
task_168910 1.00 0.83 1.00 1.00 0.98 0.90
task_168911 0.99 1.00 1.00 1.00 0.99 0.98
task_168912 0.98 0.97 0.99 1.00 1.00 0.98
task_189354 1.00 - 1.00 0.91 1.00 0.96
task_189355 1.00 - 0.00 - - 0.94
task_189356 1.00 - 0.97 - - -
task_3 1.00 0.94 1.00 1.00 1.00 1.00
task_31 0.94 0.80 1.00 - 1.00 0.94
task_34539 1.00 - 1.00 1.00 0.99 0.99
task_3917 0.99 - 0.98 - 1.00 0.98
task_3945 1.00 - 1.00 0.99 1.00 1.00
task_53 0.97 0.76 0.96 1.00 - 0.93
task_7592 1.00 0.99 1.00 0.99 1.00 1.00
task_7593 1.00 0.68 0.99 0.82 - 0.97
task_9952 0.96 0.99 0.98 0.98 1.00 0.99
task_9977 1.00 0.97 1.00 1.00 1.00 0.99
task_9981 0.98 0.89 1.00 0.98 1.00 0.98
RPkHTZV8Ntx
Q3DWpGoX7PD
automl.cc/AutoML/2023/ABCD_Track
2023
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling
["Anna Kozak", "Hubert Ruczyński"]
The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, there are limited R solutions available with high entry level which means they are not accessible to everyone. To fill this gap, we present the $\textit{forester}$ package, which offers ease of use regardless of the user's proficiency in the area of machine learning. The $\textit{forester}$ package is an open-source AutoML package implemented in R designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the $\textit{forester}$ package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.
["machine learning", "automated machine learning", "tree-based models", "automated reporting"]
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling
Anna Kozak¹  Hubert Ruczyński¹
¹Warsaw University of Technology

Abstract  The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, there are limited R solutions available with a high entry level, which means they are not accessible to everyone. To fill this gap, we present the forester package, which offers ease of use regardless of the user's proficiency in the area of machine learning. The forester package is an open-source AutoML package implemented in R designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the forester package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.

1 Introduction
Machine learning is being used more and more in the world around us. Every day, models are created to assist doctors (Shimizu and Nakayama, 2020), financiers (Jorge et al., 2022), or tourists (Fararni et al., 2021). With the increasing demand for model building, research is being conducted on automatically developing tools to build artificial intelligence based solutions.
Many types of models are used in machine learning, ranging from decision rules (e.g., a scoring card model) to complex neural network structures modeling natural language (large language models, for example, ChatGPT (Bavarian et al., 2022)).
Viewing machine learning in terms of tabular data, we have a wide range of models available, from decision trees and linear or logistic regression to random forests, SVM, or neural networks. However, tree-based models are the most widely used; the main reason behind this is their high predictive efficiency. A simple decision tree model gives relatively satisfactory results, but using multiple trees to create a random forest allows significantly higher predictive power (Caruana et al., 2008; Grinsztajn et al., 2022).
Automating the process to build machine learning models can include many different components. For example, the CRoss Industry Standard Process for Data Mining (CRISP-DM) (Wirth and Hipp, 2000) is the most common methodology for data mining, analytics, and data science projects. But the basic framework of an automatic machine learning system is the preparation of models based on data entered by the user. This process can be extended in various directions; for example, a preliminary analysis of the given data can be taken care of to look for potential data errors or outlier observations, i.e. exploratory data analysis. Another essential element may be the search space of the model's hyperparameters. Optimization of hyperparameters can be based on simple methods such as a predefined parameter grid or random search. Another way to select hyperparameters is to use Bayesian optimization (Snoek et al., 2012) or meta-learning (Vilalta et al., 2004; Vanschoren, 2019; Woźnica and Biecek, 2022). After tuning the models with hyperparameter optimization, the next step we can add is to analyze the results in the form of a leaderboard or visualization.

AutoML 2023 Workshop Track ©2023 the authors, released under CC BY 4.0
By extending with explanatory methods (Biecek and Burzykowski, 2021) and reporting, the entire machine learning process can be finalized.
Automating the process of machine learning allows access to data science tools for people who are starting in data analysis and modeling. At the same time, it is an improvement and speeds up the work of experienced data scientists, who can make at least baseline models using a single line of code.
In this paper, we present an AutoML package written for R (R Core Team, 2022) to create models for regression and binary classification tasks on tabular data. The main goals of the package are: making the package easy to use, fully automating all the necessary steps inside the ML pipeline, and providing results that are easy to create, understand and allow diagnostics of the models. The availability of responsible machine learning methods in the solution allows the results of complex models to be interpreted. Changing the focus from obtaining the best possible outcomes to the interpretability of the results is a novelty for the AutoML tools. The implementation of the forester package can be found in our GitHub repository¹. The software is open source and contains comprehensive documentation with examples of use.

2 Related works
Packages for AutoML are prevalent in Python. One of the first AutoML solutions, Auto-WEKA (Thornton et al., 2013), was followed by Auto-Sklearn (Feurer et al., 2015, 2022) and TPOT (Tree-Based Pipeline Optimization Tool) (Olson et al., 2016), which was one of the very first AutoML methods and open-source software packages developed for the data science community in Python. But in R, there are few approaches. One of them is the H2O package (LeDell et al., 2022). It is an open-source library that is an in-memory, distributed, fast, and scalable machine learning and predictive analytics platform that creates a ranked list of models easily exported for use in a production environment.
The authors have created an easy-to-use interface that automates the training of multiple candidate models. H2O's AutoML is also designed for more advanced users by providing a simple wrapper function that performs many modeling tasks. H2O's AutoML process automatically trains models and tunes them at user-specified times. To better understand the quality of models in H2O, we can rely on metrics such as R² and mean square error (MSE). For comparison, in the forester package, we can compare models using the most commonly used metrics or even define a new custom metric. What particularly distinguishes the forester package from H2O is the preprocessing. In the latter's case, it only includes target encoding and is in the experimental stage. In the forester package, we have more accurate and extensive preprocessing. In addition, H2O always requires Java to work, so the user must also install it.
The second widely-used framework is the mlr3 package (Lang et al., 2019), which provides a framework for classification, regression, survival analysis, and other ML tasks such as cluster analysis. It provides the ability to perform hyperparameter tuning and feature selection. The package is well-documented, contains many functions and models, and provides many capabilities. However, it is different from a typical package for AutoML, as creating models requires knowledge of how to do it and some time to assemble such a model. It also has its drawbacks, such as the need for more preprocessing, which would help to use it more easily; for example, the XGBoost model has to have only numerical data without factors. There is also no way to divide the collection into training, testing, and validation subsets. The mlr3 package provides functionality that builds on the basic components of machine learning. It can be extended to include preprocessing, pipelining, visualization, additional learners, additional task types, and more. To create these properties, we need to install many other libraries.
In the forester package, we provide these components at once, and with a single function, we can perform preprocessing, prepare visualization of the results and generate a report. A more detailed comparison of the forester package with H2O and mlr3 is presented in Appendix F.

¹ https://github.com/ModelOriented/forester

Figure 1: A diagram presenting the forester pipeline. The forester analyses poor-quality data with the in-built data check (1), which points to possible issues, and later data preparation (2) handles them during the preprocessing. In the next step, the models are trained with default and random searched parameters and tuned with a Bayesian optimization algorithm (3). In the end, trained models are evaluated (4) and presented as a ranked list. In addition, the package offers the user additional features.

3 forester AutoML
The forester is an AutoML package automating the machine learning pipeline, starting from the data preparation, through model training, to the interpretability of the results. This way, we minimize the user's time performing basic and often repetitive activities related to the machine-learning process. Despite the high automation of the pipeline shown in Figure 1, we expose multiple parameters which advanced data scientists can use to customize the model creation. The whole package relies on the four pillars described in this section.

1. Data check
The first one, called data check, concerns a data preparation phase. Data preparation is a crucial part of the modeling process (Rutkowski et al., 2010), so we cannot blindly assume a single way of transforming the data for all cases.
Appropriate data preprocessing is crucial to building a model with a small error rate. To face that issue, we introduce a data check report summarizing the dataset with some basic information and pointing out possible problems. Data problems can affect the following modeling stages and be relevant to any model. The data check report points out id-like, duplicated, static, or highly correlated columns. Moreover, it points out the outliers, missing values, and the imbalance of the target. This way we can propose some simple heuristic data preprocessing methods, yet more advanced users are able to fight the issues mentioned by studying the data check report on their own.

2. Data preparation
Preparing the data for modeling is another crucial aspect after checking the data. It can be done using a dedicated tool, but the forester package offers two general-purpose preprocessing methods, basic and advanced. The main purpose of this function is to remove the need to prepare data manually, differently for different types of models. The basic preparation consists of the actions that are necessary for the package to work, that is: the removal of static columns, binarization of the target variable, and imputation of the missing data using the MICE algorithm (Buuren and Groothuis-Oudshoorn, 2011). The advanced method additionally includes the removal of id-like columns (features suspected of being ids), removal of highly correlated columns (Spearman's rank for the numerical features, and Cramér's V rank for categorical features) as well as feature selection with the BORUTA algorithm (Kursa and Rudnicki, 2010). Additionally, every model in the forester package requires a different data format, which is also prepared inside the main function.

3. Model training and tuning
The forester package's third and most important pillar is model training and tuning. Our solution focuses on the tree-based model family because of their high-quality performance for various tabular data tasks.
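The column-filtering heuristics from the data preparation step can be sketched in a few lines of plain R. This is an illustrative sketch, not the package's internal implementation: it drops static columns and, for each pair of highly correlated numeric columns (Spearman's rank), keeps one of them.

```
# Illustrative column filtering: remove static columns, then drop one column
# of each highly correlated numeric pair (|Spearman rho| above a threshold).
drop_redundant_columns <- function(df, cor_threshold = 0.9) {
  static <- vapply(df, function(col) length(unique(col)) <= 1, logical(1))
  df <- df[, !static, drop = FALSE]
  num <- vapply(df, is.numeric, logical(1))
  if (sum(num) > 1) {
    cors <- abs(cor(df[, num, drop = FALSE], method = "spearman",
                    use = "pairwise.complete.obs"))
    cors[upper.tri(cors, diag = TRUE)] <- 0  # keep each pair only once
    correlated <- colnames(cors)[apply(cors, 2, max) > cor_threshold]
    df <- df[, setdiff(colnames(df), correlated), drop = FALSE]
  }
  df
}
```

The id-like column detection, MICE imputation, and BORUTA selection mentioned above are separate steps layered on top of such filtering.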
We've limited ourselves to 5 well-known engines with different strong and weak points, so they complement each other.
We have included the basic decision tree from the partykit package (Hothorn and Zeileis, 2015) as an extremely light engine, but mostly, we have focused on the ensemble models. The only bagging representative is the random forest from the ranger package (Wright and Ziegler, 2017), which is reluctant to overfit.
We have also considered three different boosting algorithms. The XGBoost model (Chen and Guestrin, 2016) is highly effective, but due to the need for one-hot encoding, it suffers from the abundance of categorical features. However, the LightGBM model (Ke et al., 2017), which works best for medium and large datasets, has problems with the small ones. The last engine is CatBoost (Prokhorenkova et al., 2018), which can achieve superior performance but requires the Java environment installed, which is a minor inconvenience.
The models are trained with three approaches: using the default parameters, performing the random search algorithm within the predefined parameter space, and running an advanced Bayesian Optimization algorithm for fine-grained tuning. The first method is the baseline for other models. With the second one, we can cheaply create multiple models and explore various parameter combinations. The best and most time-consuming method is the Bayesian Optimization from the ParBayesianOptimization package. However, it is extremely useful for complex tasks.

4. Model evaluation
The last pillar is the automatic evaluation of the trained models. The forester package assesses every trained model by various metrics, such as accuracy, area under the receiver operating characteristic curve (AUC), and F1 for the binary classification tasks, and Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), or R² for the regression tasks. The results are later presented as a ranked list sorted by the outcomes (for example, ascending order for RMSE, and descending for AUC).
Moreover, the user can define their own metrics and provide them for the evaluation phase.

4 forester features
One of the most important goals for the forester package is the convenience of use and helping the users to focus more on analyzing the results instead of writing the code. To obtain such a user-friendly environment, the forester offers plenty of additional features useful for data scientists.

4.1 Model explanations
In recent years, interpretable machine learning has become a significant trend in machine learning. The tools providing interpretability, such as DALEX (Biecek, 2018) or iml (Molnar et al., 2020), allow data scientists to explain how the models they create work, making it easier to detect their misbehavior. Models' explainability also enhances trust in such tools, even in demanding environments like medical research. To support using explainable methods for the models trained by the forester, we have created a wrapper for the DALEX explainer compatible with our package. This way, the user can easily create various explanations for the trained models.

4.2 Saving the outcomes
Another crucial feature is the save function, which lets the user save the training output. The returned forester object contains lots of information, such as the preprocessed dataset, split datasets, split indexes, ranked lists for training, testing, and validation datasets, the predictions of the model, and much more. The abundance of objects makes it incredibly important to save the outcomes after the time-consuming training process.

4.3 Automated report
Last but not least, our solution offers an automatically generated report that helps users quickly and easily analyze the training results. The main goal of this feature is to ensure that every user is able to easily assess the quality of the trained models. The report consists of basic information about the dataset, a data check report, a ranked list of the best ten models, and visualizations concerning model quality.
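The ranked-list mechanics described in the model evaluation pillar can be sketched in plain R. The function and argument names here are illustrative, not the package's internal API; the point is that a user-defined metric is just a function of observed and predicted values returning a single number.

```
# Metrics are plain functions(observed, predicted) -> numeric scalar.
rmse <- function(y, y_hat) sqrt(mean((y - y_hat)^2))
mae  <- function(y, y_hat) mean(abs(y - y_hat))

# Score a named list of per-model prediction vectors against the target `y`
# and sort the resulting table by the chosen metric.
ranked_list <- function(predictions, y,
                        metrics = list(rmse = rmse, mae = mae),
                        sort_by = "rmse", decreasing = FALSE) {
  scores <- data.frame(model = names(predictions))
  for (m in names(metrics)) {
    scores[[m]] <- vapply(predictions, function(p) metrics[[m]](y, p),
                          numeric(1))
  }
  scores[order(scores[[sort_by]], decreasing = decreasing), ]
}
```

For error metrics such as RMSE the list is sorted in ascending order, while for quality metrics such as AUC it would be sorted in descending order, matching the convention described above.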
An example report for the blood-transfusion-service-center dataset (from the OpenML-CC18 benchmark (Bischl et al., 2021)) is provided in Appendix G.
The plots are divided into two groups; the first one compares the outcomes of different models, which helps to decide which model is the best. For example, guided by the radar chart comparison plot, we can choose the model with slightly worse accuracy, but better AUC and F1 values.
The second type of plots concentrates on the model with the best performance, and its most prominent feature is providing a feature importance plot. This visualization lets us understand which variables are the most important for the model; thus, we can evaluate its correctness.
It is worth noticing that the reports, mostly visualizations, are different for binary classification and regression tasks, as we measure their performance differently.

5 User interface
5.1 Training function
The forester's main train() function runs the entire AutoML pipeline, including the data preparation, model training, and evaluation. To keep the package as simple as possible, the function requires only the dataset and target column name (Listing 1); however, to keep the tool versatile, there are lots of custom parameters for more advanced users (Listing 2).
With the latter option, the user can specify the amount of Bayesian optimization iterations, the number of random search evaluations, proportions of the train, test, and validation subsets, change the preprocessing methods or even add their own evaluation metric.

train_output <- train(data = lisbon, y = 'Price')
Listing 1: Training models with the forester package and default parameters.

train_output <- train(data = lisbon,
                      y = 'Price',
                      verbose = TRUE,
                      engine = c('ranger', 'xgboost', 'decision_tree',
                                 'lightgbm', 'catboost'),
                      train_test_split = c(0.6, 0.2, 0.2),
                      bayes_iter = 10,
                      random_evals = 3,
                      advanced_preprocessing = FALSE,
                      metrics = 'auto',
                      sort_by = 'auto',
                      metric_function = NULL,
                      metric_function_name = NULL,
                      metric_function_decreasing = TRUE,
                      best_model_number = 5)
Listing 2: Training models with the forester package and custom parameters.

5.2 Extensive features
Apart from the train() function, the user can utilize additional functions, which are helpful during the modeling process. The check_data() function (Listing 3) enables printing a data check report outside of the train() function. The save() function (Listing 4) lets us save the outcome of the training process, whereas the report() function (Listing 5) creates a training report.
The last extension is the explain() function (Listing 6), which creates a DALEX explainer that can be used to generate multiple visualizations concerning the model interpretability with the DALEX package.

check_data(data = `blood-transfusion-service-center`, y = 'Class')
Listing 3: Generating a data check report.

save(train_output, name = 'train_output.RData')
Listing 4: Saving the train output.

report(train_output, 'report.pdf')
Listing 5: Generating a report from the train output.

exp <- explain(models = train_output$best_models[[1]],
               test_data = train_output$data,
               y = train_output$y,
               verbose = FALSE)
Listing 6: Creating a model explainer that lets us use functions from the DALEX package.

6 Performance
To evaluate the performance of the package, we've decided to compare it to the H2O framework on the binary classification tasks from the OpenML-CC18 benchmark (Bischl et al., 2021) and regression tasks from OpenML (Vanschoren et al., 2013). Due to the limited computational resources, we have chosen a subset of 8 datasets for classification and 7 for regression, described in Table 1 and Table 2, respectively. The binary classification datasets consisted mainly of categorical variables and contained many missing values, a significant obstacle for both solutions, whereas the regression tasks had no missing values and mostly numeric or binary values.
During the experiment, we trained the forester package three times for each dataset with random seeds provided for the data splitting function inside the forester. The same splits were later used for the H2O framework. A singular training iteration was executed for the decision tree, random forest, LightGBM, and CatBoost engines with ten iterations of the Bayesian optimization and ten random search evaluations.
For the regression task we've additionally added an XGBoost engine.
To ensure that both frameworks had the same amount of time, we have measured it for every forester training iteration and provided it to the respective H2O AutoML runs. This H2O functionality didn't work as intended, and finally this framework had a two times longer training time on average. This factor definitely improved H2O's results, and we have to bear that in mind when comparing the outcomes. For further details see Appendix E. Additionally, to ensure the same data split, we have used the indexes saved during the forester training. The source codes are included in Appendix A.
The comparison of performance for both frameworks is presented in Figure 2 and Figure 3. For the raw results, as well as aggregated tabular ones, see Appendix C. As one can see, for the binary classification task, the forester outperformed the H2O framework on five datasets: banknote-authentication, blood-transfusion-service-center, credit-approval, credit-g, and diabetes. The outcomes for the very simple datasets kr-vs-kp and breast-w were similar, and H2O obtained better performance for the phoneme data. For the regression tasks, the results were comparable to H2O's for most tasks or slightly worse, as for the pol dataset. The results show that the forester creates high-quality models that are competitive with the existing solutions.
However, our conclusions cannot be too far-fetched, since we tested the package on only a few sets for binary classification and regression tasks.
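The equal-time-budget protocol described above can be sketched as follows. The helper below is hypothetical (not part of the benchmark code in Appendix A); `max_runtime_secs`, mentioned only in a comment, is the budget argument of `h2o::h2o.automl()`.

```
# Measure the wall-clock time of one training run and reuse it as the time
# budget for the competing framework. `fit_fun` stands in for a forester
# train() call wrapped in a zero-argument function.
timed_budget <- function(fit_fun) {
  elapsed <- system.time(fit_fun())[["elapsed"]]
  # seconds; would be passed on as e.g.
  # h2o::h2o.automl(..., max_runtime_secs = elapsed)
  elapsed
}
```

As noted above, H2O did not strictly respect this budget in practice and ran roughly twice as long on average.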
We cannot say that the forester package's predictive power is better than H2O's, but they clearly are competitive.

Table 1: A subset of OpenML-CC18 benchmark datasets used during the evaluation process of the forester package, which are tabular data objects presenting the binary classification tasks. The features are mostly categorical, and they contain lots of missing values.
Name                              Number of columns  Number of rows
kr-vs-kp                          37                 3196
breast-w                          10                 699
credit-approval                   16                 690
credit-g                          21                 1000
diabetes                          9                  768
phoneme                           6                  5404
banknote-authentication           5                  1372
blood-transfusion-service-center  5                  748

Table 2: A subset of OpenML datasets used during the evaluation process of the forester package, which are tabular data objects presenting the regression tasks. In this case there were no missing values, and the features were mostly numerical or binary.
Name                                 Number of columns  Number of rows
bank32nh                             33                 8192
wine_quality                         12                 6497
Mercedes_Benz_Greener_Manufacturing  378                4209
kin8nm                               9                  8192
pol                                  49                 15000
2dplanes                             11                 40768
elevators                            19                 16599

Figure 2: Performance comparison for forester and H2O frameworks for the datasets described in Table 1. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap.
This plot clearly shows us that the forester performs better than the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

Figure 3: Performance comparison for forester and H2O frameworks for the datasets described in Table 2. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap. This plot shows us that the forester performs comparably to the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

7 Limitations and Broader Impact Statement
The forester package has limitations in the availability of models. The library contains only tree-based models, but this family proves to be extremely versatile. Only binary classification and regression are available in the current version of the package. Preparing models for multi-criteria classification, cluster analysis, or survival analysis is currently impossible. However, these features can be easily implemented in the future. The package currently performs better with smaller datasets; a large allocation of memory and time is needed for large and complex data.
One of the strongest points of the forester package is being incredibly easy to use, even if we do not have broad machine learning expertise. This approach, however, raises the risk that the models trained with the package will be of poor quality, for example, due to the training on a low-quality dataset, or that the outcomes will be misunderstood or incorrectly interpreted by the inexperienced user.
The reporting module addresses all of these responsible machine learning concerns: it informs about possible issues with the data, measures the quality of the models, and provides their explanations.

8 Conclusions
This paper presents an R package for AutoML, creating models for regression and binary classification tasks conducted on tabular data. Our solution addresses the needs we have observed in AutoML tools in various programming languages. The main goals of the package are to keep the package stable and easy to use, to automate all the necessary steps inside the ML pipeline, and to provide results that are easy to create, understand, and allow for diagnostics of the models. To achieve these results, we have focused only on the best representatives from the family of tree-based models, which show superiority over other methods on tabular data. Furthermore, we provide additional functions that allow the user to save the models, create explanations, and create a report describing the learning process and explaining the developed models. Experiments carried out tentatively indicate that more predictive power is obtained using our solution than currently existing solutions in R.

9 Submission Checklist
1. For all authors. . .
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We introduced the forester package and described its potential. Section 3 and Section 4 describe the various features.
(b) Did you describe the limitations of your work? [Yes] See Section 7.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 7.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] We believe that our paper conforms to the guidelines.
2. If you are including theoretical results. . .
(a) Did you state the full set of assumptions of all theoretical results?
[N/A] We have no theoretical results.
(b) Did you include complete proofs of all theoretical results? [N/A] We have no theoretical results.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? [Yes] See Appendix A.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] The most important results analyzed in this paper are presented or mentioned (via a link) in Appendix C.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook, see Appendix A.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook, see Appendix A.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] The training details are mentioned in the main paper Section 6, as well as in the source code described in Appendix A.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code?
[Yes] The methods were compared on the same train, test, and validation subsets, and the hyperparameter search space was the default one for each AutoML framework.
(g) Did you run ablation studies to assess the impact of different components of your approach? [No] The package at this point is pretty straightforward and doesn't contain many components that could alter the outcomes. A possible ablation study could be applied to the advanced preprocessing method; however, we did not have enough computational power for running the benchmark again.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] The models were compared by the same metrics for classification: accuracy, AUC, and F1, and for regression: RMSE, MSE, R2, and MAE.
(i) Did you compare performance over time? [No] We did not have enough resources for multiple experiment executions.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] As described in Section 6, we've performed three runs of the forester and H2O training with the random seeds set for the train, test, and validation splits as the values 123, 2137, and 21.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not have error bars on the visualizations, but we provide exact values without any statistical aggregations.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We used a tabular benchmark consisting of 8 datasets describing the binary classification tasks from the OpenML-CC18 benchmark, as described in Section 6.
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] See Appendix B.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)?
[N/A] During the experiments, all computations were conducted by the AutoML frameworks, and no additional tuning was included.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] A full list of the cited papers/tools is described in the references.
(b) Did you mention the license of the assets? [Yes] Used assets, mostly R packages, are described in Appendix D.
(c) Did you include any new assets either in the supplemental material or as a url? [Yes] The forester package is a new asset: https://github.com/ModelOriented/forester .
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] See Section 6; we are using OpenML-CC18 and its data. We cited all data sources according to the guidelines of datasets on OpenML (and in OpenML-CC18).
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] Our data does not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not do research with human subjects.
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A] We did not do research with human subjects.
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not do research with human subjects.

Acknowledgements. We would like to thank Adrianna Grudzień and Patryk Słowakiewicz for their development work on the forester package.
We also thank Katarzyna Woźnica, Hubert Baniecki, Mikołaj Spytek, and Mateusz Krzyziński for their valuable comments about the study.

References

Bavarian, M., Jun, H., Tezak, N., Schulman, J., McLeavey, C., Tworek, J., and Chen, M. (2022). Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255.

Biecek, P. (2018). DALEX: Explainers for Complex Predictive Models in R. Journal of Machine Learning Research, 19(84):1–5.

Biecek, P. and Burzykowski, T. (2021). Explanatory Model Analysis. Chapman and Hall/CRC, New York.

Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., and Vanschoren, J. (2021). OpenML benchmarking suites. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

Buuren, S. and Groothuis-Oudshoorn, C. (2011). MICE: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software, 45.

Caruana, R., Karampatziakis, N., and Yessenalina, A. (2008). An empirical evaluation of supervised learning in high dimensions. Proceedings of the 25th International Conference on Machine Learning, pages 96–103.

Chen, T. and Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, page 785–794.

Fararni, K. A., Nafis, F., Aghoutane, B., Yahyaouy, A., Riffi, J., and Sabri, A. (2021). Hybrid recommender system for tourism based on big data and AI: A conceptual framework. Big Data Mining and Analytics, 4(1):47–55.

Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2022). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. Journal of Machine Learning Research, 23(261):1–61.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning.
In Advances in Neural Information Processing Systems, volume 28.

Grinsztajn, L., Oyallon, E., and Varoquaux, G. (2022). Why do tree-based models still outperform deep learning on typical tabular data? In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Hothorn, T. and Zeileis, A. (2015). partykit: A Modular Toolkit for Recursive Partytioning in R. Journal of Machine Learning Research, 16(118):3905–3909.

Jorge, C. C., Antonio, O. A. J., Hugo, G. M. V., and Hugo, O. P. D. (2022). Machine Learning for Personal Credit Evaluation: A Systematic Review. WSEAS TRANSACTIONS ON COMPUTER RESEARCH, 10:62–73.

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems, volume 30.

Kursa, M. B. and Rudnicki, W. R. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software, 36(11):1–13.

Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., and Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44):1903.

LeDell, E., Gill, N., Aiello, S., Fu, A., Candel, A., Click, C., Kraljevic, T., Nykodym, T., Aboyoun, P., Kurka, M., and Malohlava, M. (2022). h2o: R Interface for the 'H2O' Scalable Machine Learning Platform. R package version 3.38.0.1.

Molnar, C., Casalicchio, G., and Bischl, B. (2020). Interpretable machine learning – a brief history, state-of-the-art and challenges. In ECML PKDD 2020 Workshops, pages 417–431.

Olson, R. S., Bartley, N., Urbanowicz, R. J., and Moore, J. H. (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO '16, pages 485–492.

Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. (2018).
CatBoost: unbiased boosting with categorical features. In Advances in Neural Information Processing Systems, volume 31.

R Core Team (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.

Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L., and Zurada, J. (2010). Artificial Intelligence and Soft Computing, Part II: 10th International Conference, ICAISC 2010.

Shimizu, H. and Nakayama, K. I. (2020). Artificial intelligence in oncology. Cancer Science, 111(5):1452–1460.

Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, volume 25.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 847–855.

Vanschoren, J. (2019). Meta-Learning, pages 35–61. Springer International Publishing, Cham.

Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. (2013). OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49–60.

Vilalta, R., Giraud-Carrier, C., Brazdil, P., and Soares, C. (2004). Using meta-learning to support data mining. International Journal of Computer Science Applications, 1.

Wirth, R. and Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining.

Woźnica, K. and Biecek, P. (2022). Towards explainable meta-learning. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part I, pages 505–520.

Wright, M. N. and Ziegler, A. (2017).
ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R. Journal of Statistical Software, 77(1):1–17.

A Source Code

The source code of the experiments, prepared visualizations, and tables from Appendix C is available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments as the forester_benchmark.Rmd file. The markdown notebook file describes the installation process, and it can be safely executed with the guidance of our remarks between the code chunks.

B Resources

As mentioned in Section 6, our team was limited in computational power. The experiment was conducted on our private PC with 32GB of RAM, CPU: 11th Gen Intel(R) Core(TM) i7-11700KF @ 3.60GHz (16 cores), and the GPU: NVIDIA GeForce RTX 3070 Ti; however, as the forester is not yet implemented to work on the GPU, only the CPU was used.

C Raw results

In this section we provide information about the raw results mentioned in Section 6 which were used in Figure 2. Raw results for train, test, and validation datasets are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/raw_training_results . In this section we offer the results aggregated as the mean values of the metrics, which are presented in Table 3, Table 4, and Table 5 for the binary classification tasks. These tables also broaden our perspective by providing AUC and F1 values. The results for the regression tasks are presented in Table 6, Table 7, and Table 8.
These tables also broaden our perspective by providing MSE, R2, and MAE values.

Table 3: This table provides mean accuracy, AUC, and F1 values for the forester and H2O frameworks for all binary classification training datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 1 1 1
banknote-authentication H2O 0.929 0.923 0.905
blood-transfusion-service-center forester 0.77 0.752 1
blood-transfusion-service-center H2O 0.7 0.682 0.519
breast-w forester 1 1 1
breast-w H2O 0.998 0.998 0.997
credit-approval forester 0.999 1 1
credit-approval H2O 0.961 0.959 0.955
credit-g forester 0.967 0.998 1
credit-g H2O 0.906 0.855 0.938
diabetes forester 0.991 0.999 1
diabetes H2O 0.874 0.871 0.826
kr-vs-kp forester 1 1 1
kr-vs-kp H2O 0.999 0.999 0.965
phoneme forester 1 1 1
phoneme H2O 1 1 1

Table 4: This table provides mean accuracy, AUC, and F1 values for the forester and H2O frameworks for all binary classification testing datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 0.995 0.995 1
banknote-authentication H2O 0.933 0.927 0.915
blood-transfusion-service-center forester 0.796 0.772 0.976
blood-transfusion-service-center H2O 0.713 0.707 0.54
breast-w forester 0.976 0.984 0.986
breast-w H2O 0.971 0.97 0.959
credit-approval forester 0.885 0.931 0.942
credit-approval H2O 0.882 0.882 0.87
credit-g forester 0.733 0.79 0.865
credit-g H2O 0.743 0.64 0.829
diabetes forester 0.768 0.823 0.799
diabetes H2O 0.753 0.727 0.643
kr-vs-kp forester 0.994 0.999 0.991
kr-vs-kp H2O 0.991 0.991 0.991
phoneme forester 0.909 0.96 0.867
phoneme H2O 0.904 0.895 0.842

Table 5: This table provides mean accuracy, AUC, and F1 values for the forester and H2O frameworks for all binary classification validation datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 1 1 1
banknote-authentication H2O 0.916 0.908 0.887
blood-transfusion-service-center forester 0.775 0.773 0.833
blood-transfusion-service-center H2O 0.675
0.68 0.509
breast-w forester 0.938 0.968 0.956
breast-w H2O 0.967 0.97 0.953
credit-approval forester 0.855 0.908 0.939
credit-approval H2O 0.867 0.862 0.842
credit-g forester 0.705 0.788 1
credit-g H2O 0.758 0.635 0.846
diabetes forester 0.747 0.803 0.866
diabetes H2O 0.755 0.735 0.656
kr-vs-kp forester 0.99 0.999 0.99
kr-vs-kp H2O 0.99 0.99 0.99
phoneme forester 0.901 0.954 0.851
phoneme H2O 0.9 0.896 0.839

Table 6: This table provides mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks for all regression training datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 0.697 0.5 0.974 0.423
2dplanes H2O 0.984 0.969 0.95 0.785
bank32nh forester 0.001 0 1 0.001
bank32nh H2O 0.054 0.003 0.806 0.037
elevators forester 0.001 0 0.978 0.001
elevators H2O 0.002 0 0.942 0.001
kin8nm forester 0.012 0 0.997 0.009
kin8nm H2O 0.066 0.004 0.937 0.051
Mercedes_Benz_Greener_Manufacturing forester 2.456 6.13 0.963 0.775
Mercedes_Benz_Greener_Manufacturing H2O 7.806 61.115 0.625 4.935
pol forester 1.139 1.483 0.999 0.699
pol H2O 1.803 3.251 0.998 0.829
wine_quality forester 0.071 0.005 0.993 0.031
wine_quality H2O 0.161 0.027 0.965 0.124

Table 7: This table provides mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks for all regression testing datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 1.003 1.007 0.948 0.802
2dplanes H2O 1.004 1.008 0.948 0.802
bank32nh forester 0.08 0.006 0.548 0.053
bank32nh H2O 0.076 0.006 0.599 0.05
elevators forester 0.002 0 0.884 0.002
elevators H2O 0.002 0 0.911 0.001
kin8nm forester 0.113 0.013 0.816 0.087
kin8nm H2O 0.084 0.007 0.899 0.065
Mercedes_Benz_Greener_Manufacturing forester 7.554 57.195 0.626 5.039
Mercedes_Benz_Greener_Manufacturing H2O 7.583 57.598 0.623 5.222
pol forester 4.739 22.508 0.987 2.242
pol H2O 3.198 10.278 0.994 1.3
wine_quality forester 0.614 0.377 0.505 0.451
wine_quality H2O 0.604 0.365 0.521 0.43

Table 8: This table provides mean RMSE, MSE, R2, and MAE values for
the forester and H2O frameworks for all regression validation datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 0.999 0.997 0.948 0.799
2dplanes H2O 1 0.999 0.948 0.8
bank32nh forester 0.082 0.007 0.544 0.053
bank32nh H2O 0.078 0.006 0.591 0.052
elevators forester 0.002 0 0.875 0.002
elevators H2O 0.002 0 0.907 0.001
kin8nm forester 0.111 0.012 0.822 0.085
kin8nm H2O 0.083 0.007 0.899 0.065
Mercedes_Benz_Greener_Manufacturing forester 8.464 73.039 0.559 5.261
Mercedes_Benz_Greener_Manufacturing H2O 8.458 72.911 0.56 5.373
pol forester 4.379 19.256 0.989 1.885
pol H2O 3.01 9.087 0.995 1.213
wine_quality forester 0.632 0.399 0.478 0.466
wine_quality H2O 0.624 0.389 0.492 0.447

D Used assets

In this section we describe the packages used for both the forester and the experiments. The packages outside of the forester required for the experiments are listed in Table 9. An additional requirement for the catboost and H2O packages is installed Java. The packages required by the forester, as well as their versions used during the experiment, are presented in Table 10.

Table 9: The packages and their versions under which the experiments were executed and supplemental materials were created.

package version license
xlsx 0.6.5 GPL-3
stringr 1.5.0 MIT
ggbeeswarm 0.6.0 GPL (>= 2)
dplyr 1.0.10 MIT
ggplot2 3.4.0 MIT
tictoc 1.1 Apache License (== 2.0)
H2O 3.38.0.1 Apache License (== 2.0)
forester 1.2.1 GPL-3
OpenML 1.12 BSD_3_clause

Table 10: The forester package's dependencies and their versions used during the experiments.

package version licence
Boruta 7.0.0 GPL (>= 2)
catboost 1.1.1 Apache License (== 2.0)
crayon 1.5.2 MIT
DALEX 2.4.2 GPL
data.table 1.14.2 MPL-2.0
ggplot2 3.4.0 MIT
ggradar 0.2 GPL
ggrepel 0.9.3 GPL-3
knitr 1.40 GPL
lightgbm 3.3.2 MIT
mice 3.14.0 GPL-2 | GPL-3
mltools 0.3.5 MIT
ParBayesianOptimization 1.2.4 GPL-2
partykit 1.2-16 GPL-2 | GPL-3
pROC 1.18.0 GPL (>= 3)
ranger 0.14.1 GPL-3
rcompanion 2.4.18 GPL-3
rmarkdown 2.16 GPL-3
splitTools 0.3.2 GPL (>= 2)
testthat 3.1.6 MIT
tibble
3.1.8 MIT
tinytex 0.43 MIT
varhandle 2.0.5 GPL (>= 2)
xgboost 1.6.0.1 Apache License (== 2.0)
stats 4.1.2 Part of R 4.1.2

E Execution times comparison

In this section we briefly explore the times needed for every experiment execution for both frameworks. The results presented in Table 11 and Table 12 show that final execution times differ, despite setting exactly the same times for the H2O experiment as the forester had. Our empirical results show that the H2O runs lasted two times longer on average than the forester, which puts a different light on the comparison of the frameworks' performance. Raw results needed for these tables are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/execution_times .

Table 11: The comparison of mean execution times in seconds for the forester and H2O for binary classification experiments.

task_name forester H2O difference relative difference
banknote-authentication 818.33 2521.33 -1703 0.28
blood-transfusion-service-center 155.67 555.67 -400 0.26
breast-w 451.33 797.33 -346 0.57
credit-approval 805 1513 -708 0.53
credit-g 2453 4234 -1781 0.58
diabetes 1645.67 2643.67 -998 0.62
kr-vs-kp 451.33 806.67 -355.33 0.57
phoneme 2748.33 3695.33 -947 0.67

Table 12: The comparison of mean execution times in seconds for the forester and H2O for regression experiments.

task_name forester H2O difference relative difference
2dplanes 401 1050.67 -649.67 0.38
bank32nh 708.67 1214.67 -506 0.58
elevators 720.33 1435.33 -715 0.5
kin8nm 544.67 1564 -1019.33 0.35
Mercedes_Benz_Greener_Manufacturing 848 1371.67 -523.67 0.61
pol 756 1548.33 -792.33 0.49
wine_quality 1317.33 2130 -812.67 0.63

F Package comparison

We have prepared a notebook showing the differences between the packages described in the related work section. The document includes a comparison of package installation, a description of available preprocessing, variable selection options, and model tuning.
In addition, visualizations, methods of explainable machine learning, report preparation, and references to available package documentation are described. We do not give a final assessment of the best package because it could be subjective, but we expose the reader to criticism. The notebook is available in the GitHub repository https://github.com/ModelOriented/forester/blob/main/misc/experiments/framework_comparison.Rmd .

G Report example

Forester report
version 1.2.1
2023-05-20 01:36:36

This report contains details about the best trained model, a table with metrics for every trained model, a scatter plot for the chosen metric, and info about the used data.

The best models

This is the binary_clf task.
The best model is: xgboost_RS_5.
The names of the models were created by a pattern Engine_TuningMethod_Id, where:
• Engine describes the engine used for the training (random_forest, xgboost, decision_tree, lightgbm, catboost),
• TuningMethod describes how the model was tuned (basic for basic parameters, RS for random search, bayes for Bayesian optimization),
• Id for separating the random search parameters sets.
More details about the best model are present at the end of the report.

no. name accuracy auc f1
13 xgboost_RS_5 0.7919 0.8088 0.2791
7 ranger_RS_4 0.7785 0.6965 0.1538
18 lightgbm_RS_5 0.7785 0.7361 0.421
2 xgboost_model 0.7718 0.7090 0.4138
14 lightgbm_RS_1 0.7718 0.7578 0.3704
4 ranger_RS_1 0.7651 0.7930 NaN
6 ranger_RS_3 0.7651 0.7228 NaN
10 xgboost_RS_2 0.7651 0.7801 NaN
11 xgboost_RS_3 0.7651 0.7367 NaN
16 lightgbm_RS_3 0.7651 0.7690 NaN
21 lightgbm_bayes 0.7651 0.7340 0.3636
8 ranger_RS_5 0.7584 0.7579 0.0526
12 xgboost_RS_4 0.7517 0.6609 0.3729
19 ranger_bayes 0.7517 0.7333 0.2449
20 xgboost_bayes 0.7517 0.7409 0.2449
1 ranger_model 0.7450 0.7063 0.3214
3 lightgbm_model 0.7450 0.6842 0.3871
9 xgboost_RS_1 0.7450 0.6619 0.3667
15 lightgbm_RS_2 0.7181 0.6058 0.3824
17 lightgbm_RS_4 0.7181 0.6058 0.3824
no. name accuracy auc f1 (continued)
5 ranger_RS_2 0.7114 0.6929 0.2712

Plots for all models

[Model comparison plot: accuracy, auc, and f1 for xgboost_model, lightgbm_RS_1, xgboost_RS_5, ranger_RS_4, and lightgbm_RS_5.]

Plots for the best model - xgboost_RS_5

[ROC Curve (AUC = 0.8088): sensitivity vs. specificity.]
[Confusion Matrix: Target vs. Prediction.]

Feature Importance for the best model - xgboost_RS_5

[Feature importance plot for the xgb.Booster model: root mean square error (RMSE) loss after permutations for features V4, V3, V2, V1.]

Details about data

——————– CHECK DATA REPORT ——————–

The dataset has 748 observations and 5 columns, whose names are: V1; V2; V3; V4; Class;
with the target value described by a column: Class.
No static columns.
No duplicate columns.
No target values are missing.
No predictor values are missing.
No issues with dimensionality.
Strongly correlated, by Spearman rank, pairs of numerical values are: V2 - V3: 1;
These observations might be outliers due to their numerical columns values: 1 10 116 342 496 497 498 499 5 500 501 503 504 505 506 518 529 747 748;
Dataset is unbalanced with: 3.202247 proportion with 1 being a dominating class.
Columns names suggest that none of them are IDs.
Columns data suggest that none of them are IDs.

——————– CHECK DATA REPORT END ——————–

The best model details

------------ Xgboost model ------------
Parameters
niter: 20
[evaluation_log table: iter 1–20, train_auc column]
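A report like the one above is produced from a trained model set, so before reproducing it or the benchmark notebook from Appendix A, the package itself has to be installed. A minimal sketch, assuming the standard devtools route for a GitHub-hosted package (the authoritative steps are the ones described in forester_benchmark.Rmd):

```r
# Sketch: installing forester from its GitHub repository.
# Assumption: the devtools route applies; catboost and H2O additionally
# require Java, as noted in Appendix D.
install.packages("devtools")
devtools::install_github("ModelOriented/forester")
```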
pwnGDu0zQu
Q3DWpGoX7PD
automl.cc/AutoML/2023/ABCD_Track
2023
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling
["Anna Kozak", "Hubert Ruczy\u0144ski"]
The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, there are limited R solutions available with high entry level which means they are not accessible to everyone. To fill this gap, we present the $\textit{forester}$ package, which offers ease of use regardless of the user's proficiency in the area of machine learning. The $\textit{forester}$ package is an open-source AutoML package implemented in R designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the $\textit{forester}$ package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.
["machine learning", "automated machine learning", "tree-based models", "automated reporting"]
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling

Anna Kozak1, Hubert Ruczyński1
1 Warsaw University of Technology

Abstract

The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, there are limited R solutions available, and their high entry level means they are not accessible to everyone. To fill this gap, we present the forester package, which offers ease of use regardless of the user's proficiency in the area of machine learning.
The forester package is an open-source AutoML package implemented in R designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the forester package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.

1 Introduction

Machine learning is being used more and more in the world around us. Every day, models are created to assist doctors (Shimizu and Nakayama, 2020), financiers (Jorge et al., 2022), or tourists (Fararni et al., 2021). With the increasing demand for model building, research is being conducted on automatically developing tools to build artificial intelligence based solutions.
Many types of models are used in machine learning, ranging from decision rules (scoring card model) to complex neural network structures modeling natural language (large language models, for example, ChatGPT (Bavarian et al., 2022)).
Viewing machine learning in terms of tabular data, we have a wide range of models available, from decision trees and linear or logistic regression to random forests, SVM, or neural networks. However, tree-based models are the most widely used; the main reason behind this is their high predictive efficiency. A simple decision tree model gives relatively satisfactory results, but using multiple trees to create a random forest allows significantly higher predictive power (Caruana et al., 2008; Grinsztajn et al., 2022).
Automating the process to build machine learning models can include many different components. For example, the CRoss Industry Standard Process for Data Mining (CRISP-DM) (Wirth and Hipp, 2000) is the most common methodology for data mining, analytics, and data science projects. But the basic framework of an automatic machine learning system is the preparation of models based on data entered by the user. This process can be extended in various directions; for example, a preliminary analysis of the given data can be taken care of to look for potential data errors or outlier observations, i.e. exploratory data analysis. Another essential element may be the search space of the model's hyperparameters. Optimization of hyperparameters can be based on simple methods such as a predefined parameter grid or random search. Another way to select hyperparameters is to use Bayesian optimization (Snoek et al., 2012) or meta-learning (Vilalta et al., 2004; Vanschoren, 2019; Woźnica and Biecek, 2022). After tuning the models with hyperparameter optimization, the next step we can add is to analyze the results in the form of a leaderboard or visualization.

AutoML 2023 Workshop Track ©2023 the authors, released under CC BY 4.0
By extending with explanatory methods (Biecek and Burzykowski, 2021) and reporting, the entire machine learning process can be finalized.
Automating the process of machine learning allows access to data science tools for people who are starting out in data analysis and modeling. At the same time, it is an improvement and speeds up the work of experienced data scientists, who can make at least baseline models using a single line of code.
In this paper, we present an AutoML package written for R (R Core Team, 2022) to create models for regression and binary classification tasks on tabular data. The main goals of the package are: making the package easy to use, fully automating all the necessary steps inside the ML pipeline, and providing results that are easy to create and understand and that allow diagnostics of the models. The availability of responsible machine learning methods in the solution allows the results of complex models to be interpreted. Changing the focus from obtaining the best possible outcomes to the interpretability of the results is a novelty for AutoML tools. The implementation of the forester package can be found in our GitHub repository1. The software is open source and contains comprehensive documentation with examples of use.

2 Related works

Packages for AutoML are prevalent in Python. The first AutoML solution, Auto-WEKA (Thornton et al., 2013), was followed by Auto-Sklearn (Feurer et al., 2015, 2022) and TPOT (Tree-Based Pipeline Optimization Tool) (Olson et al., 2016), which was one of the very first AutoML methods and open-source software packages developed for the data science community in Python. But in R, there are few approaches. One of them is the H2O package (LeDell et al., 2022). It is an open-source library that is an in-memory, distributed, fast, and scalable machine learning and predictive analytics platform that creates a ranked list of models easily exported for use in a production environment.
The authors have created an easy-to-use interface that automates the training of multiple candidate models. H2O's AutoML is also designed for more advanced users by providing a simple wrapper function that performs many modeling tasks. H2O's AutoML process automatically trains models and tunes them within user-specified time limits. To better understand the quality of models in H2O, we can rely on metrics such as R2 and mean square error (MSE). For comparison, in the forester package, we can compare models using the most commonly used metrics or even define a new custom metric. What particularly distinguishes the forester package from H2O is the preprocessing. In the latter's case, it only includes target encoding and is in the experimental stage. In the forester package, we have more accurate and extensive preprocessing. In addition, H2O always requires Java to work, so the user must also install it.
The second widely-used framework is the mlr3 package (Lang et al., 2019), which provides a framework for classification, regression, survival analysis, and other ML tasks such as cluster analysis. It provides the ability to perform hyperparameter tuning and feature selection. The package is well-documented, contains many functions and models, and provides many capabilities. However, it is different from a typical package for AutoML, as creating models requires knowledge of how to do it and some time to assemble such a model. It also has its drawbacks, such as the need for more preprocessing, which would help to use it more easily; for example, the XGBoost model has to have only numerical data without factors. There is also no way to divide the collection into training, testing, and validation subsets. The mlr3 package provides functionality that builds on the basic components of machine learning. It can be extended to include preprocessing, pipelining, visualization, additional learners, additional task types, and more. To create these properties, we need to install many other libraries.
In the forester package, we provide these components at once, and with a single function we can perform preprocessing, prepare visualizations of the results, and generate a report. A more detailed comparison of the forester package with H2O and mlr3 is presented in Appendix F.

Figure 1: A diagram presenting the forester pipeline. The forester analyses poor-quality data with the in-built data check (1), which points to possible issues, and later data preparation (2) handles them during the preprocessing. In the next step, the models are trained with default and random-searched parameters and tuned with a Bayesian optimization algorithm (3). In the end, trained models are evaluated (4) and presented as a ranked list. In addition, the package offers the user additional features.

3 forester AutoML

The forester is an AutoML package automating the machine learning pipeline, starting from data preparation, through model training, to the interpretability of the results. This way, we minimize the time the user spends on basic and often repetitive activities related to the machine learning process. Despite the high automation of the pipeline shown in Figure 1, we expose multiple parameters which advanced data scientists can use to customize the model creation. The whole package relies on the four pillars described in this section.

1. Data check
The first pillar, called data check, concerns the data preparation phase. Data preparation is a crucial part of the modeling process (Rutkowski et al., 2010), so we cannot blindly assume a single way of transforming the data for all cases.
Appropriate data preprocessing is crucial to building a model with a small error rate. To address this issue, we introduce a data check report summarizing the dataset with basic information and pointing out possible problems. Data problems can affect the subsequent modeling stages and be relevant to any model. The data check report points out id-like, duplicated, static, or highly correlated columns. Moreover, it flags outliers, missing values, and the imbalance of the target. Based on it, we can propose simple heuristic data preprocessing methods, while more advanced users can address the reported issues on their own by studying the data check report.

2. Data preparation
Preparing the data for modeling is another crucial step after checking the data. It can be done with a dedicated tool, but the forester package offers two general-purpose preprocessing methods: basic and advanced. The main purpose of this step is to remove the need to manually prepare the data differently for different types of models. The basic preparation consists of the actions necessary for the package to work, that is: removal of static columns, binarization of the target variable, and imputation of missing data using the MICE algorithm (Buuren and Groothuis-Oudshoorn, 2011). The advanced method additionally includes the removal of id-like columns (features suspected of being ids), the removal of highly correlated columns (Spearman's rank for numerical features and Cramér's V for categorical features), as well as feature selection with the Boruta algorithm (Kursa and Rudnicki, 2010). Additionally, every model in the forester package requires a different data format, which is also prepared inside the main function.

3. Model training and tuning
The forester package's third and most important pillar is model training and tuning. Our solution focuses on the tree-based model family because of its high-quality performance on various tabular data tasks.
We have limited ourselves to five well-known engines with different strong and weak points, so that they complement each other. We have included the basic decision tree from the partykit package (Hothorn and Zeileis, 2015) as an extremely light engine, but mostly we have focused on ensemble models. The only bagging representative is the random forest from the ranger package (Wright and Ziegler, 2017), which is resistant to overfitting.

We have also considered three different boosting algorithms. The XGBoost model (Chen and Guestrin, 2016) is highly effective, but due to the need for one-hot encoding, it suffers when categorical features abound. The LightGBM model (Ke et al., 2017) works best for medium and large datasets but has problems with small ones. The last engine is CatBoost (Prokhorenkova et al., 2018), which can achieve superior performance but requires the Java environment to be installed, which is a minor inconvenience.

The models are trained with three approaches: using the default parameters, performing the random search algorithm within a predefined parameter space, and running an advanced Bayesian optimization algorithm for fine-grained tuning. The first method serves as the baseline for the other models. With the second one, we can cheaply create multiple models and explore various parameter combinations. The best and most time-consuming method is the Bayesian optimization from the ParBayesianOptimization package, which is extremely useful for complex tasks.

4. Model evaluation
The last pillar is the automatic evaluation of the trained models. The forester package assesses every trained model with various metrics, such as accuracy, area under the receiver operating characteristic curve (AUC), and F1 for binary classification tasks, and Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), or R² for regression tasks. The results are later presented as a ranked list sorted by the outcomes (for example, in ascending order for RMSE and descending order for AUC).
Moreover, the user can define their own metrics and provide them for the evaluation phase.

4 forester features

One of the most important goals of the forester package is convenience of use, helping users focus on analyzing the results instead of writing the code. To obtain such a user-friendly environment, the forester offers plenty of additional features useful for data scientists.

4.1 Model explanations

In recent years, interpretable machine learning has become a significant trend in machine learning. Tools providing interpretability, such as DALEX (Biecek, 2018) or iml (Molnar et al., 2020), allow data scientists to explain how the models they create work, making it easier to detect their misbehavior. Models' explainability also enhances trust in such tools, even in demanding domains like medical research. To support explainable methods for the models trained by the forester, we have created a wrapper for the DALEX explainer compatible with our package. This way, the user can easily create various explanations for the trained models.

4.2 Saving the outcomes

Another crucial feature is the save function, which lets the user save the training output. The returned forester object contains lots of information, such as the preprocessed dataset, the split datasets, the split indexes, ranked lists for the training, testing, and validation datasets, the predictions of the model, and much more. This abundance of objects makes it incredibly important to save the outcomes after the time-consuming training process.

4.3 Automated report

Last but not least, our solution offers an automatically generated report that helps users quickly and easily analyze the training results. The main goal of this feature is to ensure that every user is able to easily assess the quality of the trained models. The report consists of basic information about the dataset, a data check report, a ranked list of the best ten models, and visualizations concerning model quality.
An example report for the blood-transfusion-service-center dataset (from the OpenML-CC18 benchmark (Bischl et al., 2021)) is provided in Appendix G.

The plots are divided into two groups. The first one compares the outcomes of different models, which helps to decide which model is the best; for example, guided by the radar chart comparison plot, we can choose a model with slightly worse accuracy but better AUC and F1 values. The second type of plot concentrates on the model with the best performance, and its most prominent feature is a feature importance plot. This visualization lets us understand which variables are most important for the model; thus, we can evaluate its correctness. It is worth noting that the reports, mostly the visualizations, differ between binary classification and regression tasks, as we measure their performance differently.

5 User interface

5.1 Training function

The forester's main train() function runs the entire AutoML pipeline, including data preparation, model training, and evaluation. To keep the package as simple as possible, the function requires only the dataset and the target column name (Listing 1); however, to keep the tool versatile, there are lots of custom parameters for more advanced users (Listing 2).

train_output <- train(data = lisbon, y = 'Price')

Listing 1: Training models with the forester package and default parameters.
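Looking ahead, the individual calls demonstrated in this section's listings can be chained into one short end-to-end session. The sketch below combines Listing 1 with the helper functions shown later in this section (save, report, and explain); it is an illustrative sketch using only calls and accessors that appear in those listings, not the canonical workflow.

```r
library(forester)

# Train with the default settings, as in Listing 1 (lisbon is the
# example dataset used there).
train_output <- train(data = lisbon, y = 'Price')

# Persist the full training output and generate the automatic report.
save(train_output, name = 'train_output.RData')
report(train_output, 'report.pdf')

# Wrap the best-ranked model in a DALEX-compatible explainer
# (accessors as in Listing 6).
exp <- explain(models = train_output$best_models[[1]],
               test_data = train_output$data,
               y = train_output$y,
               verbose = FALSE)
```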
With the latter option, the user can specify the number of Bayesian optimization iterations, the number of random search evaluations, the proportions of the train, test, and validation subsets, change the preprocessing methods, or even add their own evaluation metric.

train_output <- train(data = lisbon,
                      y = 'Price',
                      verbose = TRUE,
                      engine = c('ranger', 'xgboost', 'decision_tree',
                                 'lightgbm', 'catboost'),
                      train_test_split = c(0.6, 0.2, 0.2),
                      bayes_iter = 10,
                      random_evals = 3,
                      advanced_preprocessing = FALSE,
                      metrics = 'auto',
                      sort_by = 'auto',
                      metric_function = NULL,
                      metric_function_name = NULL,
                      metric_function_decreasing = TRUE,
                      best_model_number = 5)

Listing 2: Training models with the forester package and custom parameters.

5.2 Extensive features

Apart from the train() function, the user can utilize additional functions that are helpful during the modeling process. The check_data() function (Listing 3) enables printing a data check report outside of the train() function. The save() function (Listing 4) lets us save the outcome of the training process, whereas the report() function (Listing 5) creates a training report.
The last extension is the explain() function (Listing 6), which creates a DALEX explainer that can be used to generate multiple visualizations concerning model interpretability with the DALEX package.

check_data(data = `blood-transfusion-service-center`, y = 'Class')

Listing 3: Generating a data check report.

save(train_output, name = 'train_output.RData')

Listing 4: Saving the train output.

report(train_output, 'report.pdf')

Listing 5: Generating a report from the train output.

exp <- explain(models = train_output$best_models[[1]],
               test_data = train_output$data,
               y = train_output$y,
               verbose = FALSE)

Listing 6: Creating a model explainer, which lets us use functions from the DALEX package.

6 Performance

To evaluate the performance of the package, we compared it to the H2O framework on binary classification tasks from the OpenML-CC18 benchmark (Bischl et al., 2021) and regression tasks from OpenML (Vanschoren et al., 2013). Due to limited computational resources, we chose a subset of 8 datasets for classification and 7 for regression, described in Table 1 and Table 2, respectively. The binary classification datasets consist mainly of categorical variables and contain many missing values, a significant obstacle for both solutions, whereas the regression tasks have no missing values and mostly numeric or binary features.

During the experiment, we trained the forester package three times for each dataset, with random seeds provided to the data splitting function inside the forester. The same splits were later used for the H2O framework. A single training iteration was executed for the decision tree, random forest, LightGBM, and CatBoost engines, with ten iterations of Bayesian optimization and ten random search evaluations.
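One benchmark repetition described above could be sketched as follows. This is a hedged illustration, not the actual benchmark code (which lives in the repository referenced in Appendix A); the seeds 123, 2137, and 21 are those reported in the submission checklist, while the loop structure, the placeholder names `dataset` and `target`, and the output file naming are assumptions.

```r
library(forester)

# Seeds reported in the submission checklist for the three repetitions.
seeds <- c(123, 2137, 21)

for (seed in seeds) {
  set.seed(seed)  # assumed to control the train/test/validation split
  train_output <- train(
    data = dataset,   # placeholder for one benchmark dataset
    y = target,       # placeholder for its target column name
    engine = c('decision_tree', 'ranger', 'lightgbm', 'catboost'),
    bayes_iter = 10,  # ten Bayesian optimization iterations
    random_evals = 10 # ten random search evaluations
  )
  # Persist each run with forester's save() helper (hypothetical file name).
  save(train_output, name = paste0('forester_seed_', seed, '.RData'))
}
```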
For the regression tasks, we additionally added the XGBoost engine. To ensure that both frameworks had the same amount of time, we measured the time of every forester training iteration and provided it to the respective H2O AutoML runs. This H2O functionality did not work as expected, and in the end the framework had, on average, twice as long a training time. This factor certainly improved H2O's results, and we have to bear that in mind when comparing the outcomes. For further details, see Appendix E. Additionally, to ensure the same data split, we used the indexes saved during the forester training. The source codes are included in Appendix A.

The comparison of the performance of both frameworks is presented in Figure 2 and Figure 3. For the raw results, as well as aggregated tabular ones, see Appendix C. As one can see, for the binary classification task, the forester outperformed the H2O framework on five datasets: banknote-authentication, blood-transfusion-service-center, credit-approval, credit-g, and diabetes. The outcomes for the very simple datasets kr-vs-kp and breast-w were similar, and H2O obtained better performance on the phoneme data. For the regression tasks, the results were comparable to H2O's for most tasks or slightly worse, as for the pol dataset. The results show that the forester creates high-quality models that are competitive with existing solutions. However, our conclusions cannot be too far-fetched, since we tested the package on only a few datasets for binary classification and regression tasks.
We cannot say that the forester package's predictive power is better than H2O's, but the two are clearly competitive.

Table 1: A subset of OpenML-CC18 benchmark datasets used during the evaluation of the forester package: tabular datasets representing binary classification tasks. The features are mostly categorical, and they contain lots of missing values.

Name                              Number of columns  Number of rows
kr-vs-kp                          37                 3196
breast-w                          10                 699
credit-approval                   16                 690
credit-g                          21                 1000
diabetes                          9                  768
phoneme                           6                  5404
banknote-authentication           5                  1372
blood-transfusion-service-center  5                  748

Table 2: A subset of OpenML datasets used during the evaluation of the forester package: tabular datasets representing regression tasks. In this case, there are no missing values, and the features are mostly numerical or binary.

Name                                 Number of columns  Number of rows
bank32nh                             33                 8192
wine_quality                         12                 6497
Mercedes_Benz_Greener_Manufacturing  378                4209
kin8nm                               9                  8192
pol                                  49                 15000
2dplanes                             11                 40768
elevators                            19                 16599

Figure 2: Performance comparison of the forester and H2O frameworks for the binary classification datasets described in Table 1. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap.
This plot clearly shows that the forester performs better than the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

Figure 3: Performance comparison of the forester and H2O frameworks for the regression datasets described in Table 2. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap. This plot shows that the forester performs comparably to the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

7 Limitations and Broader Impact Statement

The forester package has limitations in the availability of models. The library contains only tree-based models, but this family proves to be extremely versatile. Only binary classification and regression are available in the current version of the package. Preparing models for multiclass classification, cluster analysis, or survival analysis is currently impossible; however, these features can easily be added in the future. The package currently performs better with smaller datasets; a large amount of memory and time is needed for large and complex data.

One of the strongest points of the forester package is that it is incredibly easy to use, even without broad machine learning expertise. This approach, however, raises the risk that the models trained with the package will be of poor quality, for example due to training on a low-quality dataset, or that the outcomes will be misunderstood or incorrectly interpreted by an inexperienced user.
The reporting module addresses these responsible machine learning concerns: it informs about possible issues with the data, measures the quality of the models, and provides their explanations.

8 Conclusions

This paper presents an R package for AutoML that creates models for regression and binary classification tasks on tabular data. Our solution addresses needs we have observed in AutoML tools across various programming languages. The main goals of the package are to keep it stable and easy to use, to automate all the necessary steps inside the ML pipeline, and to provide results that are easy to create, understand, and diagnose. To achieve this, we have focused only on the best representatives of the tree-based model family, which show superiority over other methods on tabular data. Furthermore, we provide additional functions that allow the user to save the models, create explanations, and generate a report describing the learning process and explaining the developed models. The experiments carried out tentatively indicate that our solution obtains more predictive power than the solutions currently existing in R.

9 Submission Checklist

1. For all authors. . .

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We introduced the forester package and described its potential. Section 3 and Section 4 describe the various features.

(b) Did you describe the limitations of your work? [Yes] See Section 7.

(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 7.

(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] We believe that our paper conforms to the guidelines.

2. If you are including theoretical results. . .

(a) Did you state the full set of assumptions of all theoretical results?
[N/A] We have no theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] We have no theoretical results.

3. If you ran experiments. . .

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? [Yes] See Appendix A.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] The most important results analyzed in this paper are presented or mentioned (via a link) in Appendix C.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] The training details are mentioned in Section 6 of the main paper, as well as in the source code described in Appendix A.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code?
[Yes] The methods were compared on the same train, test, and validation subsets, and the hyperparameter search space was the default one for each AutoML framework.

(g) Did you run ablation studies to assess the impact of different components of your approach? [No] The package at this point is fairly straightforward and does not contain many components that could alter the outcomes. A possible ablation study could be applied to the advanced preprocessing method; however, we did not have enough computational power to run the benchmark again.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] The models were compared with the same metrics: for classification, accuracy, AUC, and F1; for regression, RMSE, MSE, R², and MAE.

(i) Did you compare performance over time? [No] We did not have enough resources for multiple experiment executions.

(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] As described in Section 6, we performed three runs of the forester and H2O training with the random seeds for the train, test, and validation splits set to the values 123, 2137, and 21.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not have error bars on the visualizations, but we provide exact values without any statistical aggregations.

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We used a tabular benchmark consisting of 8 datasets describing binary classification tasks from the OpenML-CC18 benchmark, as described in Section 6.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] See Appendix B.

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)?
[N/A] During the experiments, all computations were conducted by the AutoML frameworks, and no additional tuning was included.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .

(a) If your work uses existing assets, did you cite the creators? [Yes] A full list of the cited papers/tools is given in the references.

(b) Did you mention the license of the assets? [Yes] The used assets, mostly R packages, are described in Appendix D.

(c) Did you include any new assets either in the supplemental material or as a url? [Yes] The forester package is a new asset: https://github.com/ModelOriented/forester.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] See Section 6; we are using OpenML-CC18 and its data. We cited all data sources according to the guidelines for datasets on OpenML (and in OpenML-CC18).

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] Our data does not contain personally identifiable information or offensive content.

5. If you used crowdsourcing or conducted research with human subjects. . .

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not do research with human subjects.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A] We did not do research with human subjects.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not do research with human subjects.

Acknowledgements. We would like to thank Adrianna Grudzień and Patryk Słowakiewicz for their development work on the forester package.
We also thank Katarzyna Woźnica, Hubert Baniecki, Mikołaj Spytek, and Mateusz Krzyziński for their valuable comments about the study.

References

Bavarian, M., Jun, H., Tezak, N., Schulman, J., McLeavey, C., Tworek, J., and Chen, M. (2022). Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255.

Biecek, P. (2018). DALEX: Explainers for Complex Predictive Models in R. Journal of Machine Learning Research, 19(84):1–5.

Biecek, P. and Burzykowski, T. (2021). Explanatory Model Analysis. Chapman and Hall/CRC, New York.

Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., and Vanschoren, J. (2021). OpenML benchmarking suites. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

Buuren, S. and Groothuis-Oudshoorn, C. (2011). MICE: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software, 45.

Caruana, R., Karampatziakis, N., and Yessenalina, A. (2008). An empirical evaluation of supervised learning in high dimensions. Proceedings of the 25th International Conference on Machine Learning, pages 96–103.

Chen, T. and Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785–794.

Fararni, K. A., Nafis, F., Aghoutane, B., Yahyaouy, A., Riffi, J., and Sabri, A. (2021). Hybrid recommender system for tourism based on big data and AI: A conceptual framework. Big Data Mining and Analytics, 4(1):47–55.

Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2022). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. Journal of Machine Learning Research, 23(261):1–61.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning.
In Advances in Neural Information Processing Systems, volume 28.

Grinsztajn, L., Oyallon, E., and Varoquaux, G. (2022). Why do tree-based models still outperform deep learning on typical tabular data? In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Hothorn, T. and Zeileis, A. (2015). partykit: A Modular Toolkit for Recursive Partytioning in R. Journal of Machine Learning Research, 16(118):3905–3909.

Jorge, C. C., Antonio, O. A. J., Hugo, G. M. V., and Hugo, O. P. D. (2022). Machine Learning for Personal Credit Evaluation: A Systematic Review. WSEAS TRANSACTIONS ON COMPUTER RESEARCH, 10:62–73.

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems, volume 30.

Kursa, M. B. and Rudnicki, W. R. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software, 36(11):1–13.

Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., and Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44):1903.

LeDell, E., Gill, N., Aiello, S., Fu, A., Candel, A., Click, C., Kraljevic, T., Nykodym, T., Aboyoun, P., Kurka, M., and Malohlava, M. (2022). h2o: R Interface for the 'H2O' Scalable Machine Learning Platform. R package version 3.38.0.1.

Molnar, C., Casalicchio, G., and Bischl, B. (2020). Interpretable machine learning – a brief history, state-of-the-art and challenges. In ECML PKDD 2020 Workshops, pages 417–431.

Olson, R. S., Bartley, N., Urbanowicz, R. J., and Moore, J. H. (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO '16, pages 485–492.

Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. (2018).
CatBoost: unbiased boosting with categorical features. In Advances in Neural Information Processing Systems, volume 31.

R Core Team (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.

Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L., and Zurada, J. (2010). Artificial Intelligence and Soft Computing, Part II: 10th International Conference, ICAISC 2010.

Shimizu, H. and Nakayama, K. I. (2020). Artificial intelligence in oncology. Cancer Science, 111(5):1452–1460.

Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, volume 25.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 847–855.

Vanschoren, J. (2019). Meta-Learning, pages 35–61. Springer International Publishing, Cham.

Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. (2013). OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49–60.

Vilalta, R., Giraud-Carrier, C., Brazdil, P., and Soares, C. (2004). Using meta-learning to support data mining. International Journal of Computer Science Applications, 1.

Wirth, R. and Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining.

Woźnica, K. and Biecek, P. (2022). Towards explainable meta-learning. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part I, pages 505–520.

Wright, M. N. and Ziegler, A. (2017).
ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R. Journal of Statistical Software, 77(1):1–17.

A Source Code

The source code of the experiments, the prepared visualizations, and the tables from Appendix C are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments as the forester_benchmark.Rmd file. The markdown notebook describes the installation process, and it can be safely executed with the guidance of our remarks between the code chunks.

B Resources

As mentioned in Section 6, our team was limited in computational power. The experiment was conducted on our private PC with 32 GB of RAM, CPU: 11th Gen Intel(R) Core(TM) i7-11700KF @ 3.60GHz (16 cores), and GPU: NVIDIA GeForce RTX 3070 Ti; however, as the forester is not yet implemented to work on the GPU, only the CPU was used.

C Raw results

In this section, we provide the raw results mentioned in Section 6, which were used in Figure 2. The raw results for the train, test, and validation datasets are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/raw_training_results. Here, we offer the results aggregated as the mean values of the metrics, presented in Table 3, Table 4, and Table 5 for the binary classification tasks. These tables also broaden our perspective by providing AUC and F1 values. The results for the regression tasks are presented in Table 6, Table 7, and Table 8.
These tables also broaden our perspective by providing MSE, R2, and MAE values.

Table 3: This table provides mean accuracy, AUC, and F1 values for the forester and H2O frameworks for all binary classification training datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 1 1 1
banknote-authentication H2O 0.929 0.923 0.905
blood-transfusion-service-center forester 0.77 0.752 1
blood-transfusion-service-center H2O 0.7 0.682 0.519
breast-w forester 1 1 1
breast-w H2O 0.998 0.998 0.997
credit-approval forester 0.999 1 1
credit-approval H2O 0.961 0.959 0.955
credit-g forester 0.967 0.998 1
credit-g H2O 0.906 0.855 0.938
diabetes forester 0.991 0.999 1
diabetes H2O 0.874 0.871 0.826
kr-vs-kp forester 1 1 1
kr-vs-kp H2O 0.999 0.999 0.965
phoneme forester 1 1 1
phoneme H2O 1 1 1

Table 4: This table provides mean accuracy, AUC, and F1 values for the forester and H2O frameworks for all binary classification testing datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 0.995 0.995 1
banknote-authentication H2O 0.933 0.927 0.915
blood-transfusion-service-center forester 0.796 0.772 0.976
blood-transfusion-service-center H2O 0.713 0.707 0.54
breast-w forester 0.976 0.984 0.986
breast-w H2O 0.971 0.97 0.959
credit-approval forester 0.885 0.931 0.942
credit-approval H2O 0.882 0.882 0.87
credit-g forester 0.733 0.79 0.865
credit-g H2O 0.743 0.64 0.829
diabetes forester 0.768 0.823 0.799
diabetes H2O 0.753 0.727 0.643
kr-vs-kp forester 0.994 0.999 0.991
kr-vs-kp H2O 0.991 0.991 0.991
phoneme forester 0.909 0.96 0.867
phoneme H2O 0.904 0.895 0.842

Table 5: This table provides mean accuracy, AUC, and F1 values for the forester and H2O frameworks for all binary classification validation datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 1 1 1
banknote-authentication H2O 0.916 0.908 0.887
blood-transfusion-service-center forester 0.775 0.773 0.833
blood-transfusion-service-center H2O 0.675 0.68 0.509
breast-w forester 0.938 0.968 0.956
breast-w H2O 0.967 0.97 0.953
credit-approval forester 0.855 0.908 0.939
credit-approval H2O 0.867 0.862 0.842
credit-g forester 0.705 0.788 1
credit-g H2O 0.758 0.635 0.846
diabetes forester 0.747 0.803 0.866
diabetes H2O 0.755 0.735 0.656
kr-vs-kp forester 0.99 0.999 0.99
kr-vs-kp H2O 0.99 0.99 0.99
phoneme forester 0.901 0.954 0.851
phoneme H2O 0.9 0.896 0.839

Table 6: This table provides mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks for all regression training datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 0.697 0.5 0.974 0.423
2dplanes H2O 0.984 0.969 0.95 0.785
bank32nh forester 0.001 0 1 0.001
bank32nh H2O 0.054 0.003 0.806 0.037
elevators forester 0.001 0 0.978 0.001
elevators H2O 0.002 0 0.942 0.001
kin8nm forester 0.012 0 0.997 0.009
kin8nm H2O 0.066 0.004 0.937 0.051
Mercedes_Benz_Greener_Manufacturing forester 2.456 6.13 0.963 0.775
Mercedes_Benz_Greener_Manufacturing H2O 7.806 61.115 0.625 4.935
pol forester 1.139 1.483 0.999 0.699
pol H2O 1.803 3.251 0.998 0.829
wine_quality forester 0.071 0.005 0.993 0.031
wine_quality H2O 0.161 0.027 0.965 0.124

Table 7: This table provides mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks for all regression testing datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 1.003 1.007 0.948 0.802
2dplanes H2O 1.004 1.008 0.948 0.802
bank32nh forester 0.08 0.006 0.548 0.053
bank32nh H2O 0.076 0.006 0.599 0.05
elevators forester 0.002 0 0.884 0.002
elevators H2O 0.002 0 0.911 0.001
kin8nm forester 0.113 0.013 0.816 0.087
kin8nm H2O 0.084 0.007 0.899 0.065
Mercedes_Benz_Greener_Manufacturing forester 7.554 57.195 0.626 5.039
Mercedes_Benz_Greener_Manufacturing H2O 7.583 57.598 0.623 5.222
pol forester 4.739 22.508 0.987 2.242
pol H2O 3.198 10.278 0.994 1.3
wine_quality forester 0.614 0.377 0.505 0.451
wine_quality H2O 0.604 0.365 0.521 0.43

Table 8: This table provides mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks for all regression validation datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 0.999 0.997 0.948 0.799
2dplanes H2O 1 0.999 0.948 0.8
bank32nh forester 0.082 0.007 0.544 0.053
bank32nh H2O 0.078 0.006 0.591 0.052
elevators forester 0.002 0 0.875 0.002
elevators H2O 0.002 0 0.907 0.001
kin8nm forester 0.111 0.012 0.822 0.085
kin8nm H2O 0.083 0.007 0.899 0.065
Mercedes_Benz_Greener_Manufacturing forester 8.464 73.039 0.559 5.261
Mercedes_Benz_Greener_Manufacturing H2O 8.458 72.911 0.56 5.373
pol forester 4.379 19.256 0.989 1.885
pol H2O 3.01 9.087 0.995 1.213
wine_quality forester 0.632 0.399 0.478 0.466
wine_quality H2O 0.624 0.389 0.492 0.447

D Used assets

In this section we describe the packages used for both the forester and the experiments. The packages outside of the forester required for the experiments are listed in Table 9. An additional requirement for the catboost and H2O packages is an installed Java environment. The packages required by the forester, as well as their versions used during the experiment, are presented in Table 10.

Table 9: The packages and their versions under which the experiments were executed and supplemental materials were created.

package version license
xlsx 0.6.5 GPL-3
stringr 1.5.0 MIT
ggbeeswarm 0.6.0 GPL (>= 2)
dplyr 1.0.10 MIT
ggplot2 3.4.0 MIT
tictoc 1.1 Apache License (== 2.0)
H2O 3.38.0.1 Apache License (== 2.0)
forester 1.2.1 GPL-3
OpenML 1.12 BSD_3_clause

Table 10: The forester package's dependencies and their versions used during the experiments.

package version license
Boruta 7.0.0 GPL (>= 2)
catboost 1.1.1 Apache License (== 2.0)
crayon 1.5.2 MIT
DALEX 2.4.2 GPL
data.table 1.14.2 MPL-2.0
ggplot2 3.4.0 MIT
ggradar 0.2 GPL
ggrepel 0.9.3 GPL-3
knitr 1.40 GPL
lightgbm 3.3.2 MIT
mice 3.14.0 GPL-2 | GPL-3
mltools 0.3.5 MIT
ParBayesianOptimization 1.2.4 GPL-2
partykit 1.2-16 GPL-2 | GPL-3
pROC 1.18.0 GPL (>= 3)
ranger 0.14.1 GPL-3
rcompanion 2.4.18 GPL-3
rmarkdown 2.16 GPL-3
splitTools 0.3.2 GPL (>= 2)
testthat 3.1.6 MIT
tibble 3.1.8 MIT
tinytex 0.43 MIT
varhandle 2.0.5 GPL (>= 2)
xgboost 1.6.0.1 Apache License (== 2.0)
stats 4.1.2 Part of R 4.1.2

E Execution times comparison

In this section we briefly explore the times needed for every experiment execution for both frameworks. The results presented in Table 11 and Table 12 show that the final execution times differ, despite setting exactly the same time budgets for the H2O experiments as the forester had. Our empirical results show that the H2O runs lasted two times longer on average than the forester runs, which puts a different light on the comparison of the frameworks' performance. The raw results needed for these tables are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/execution_times.

Table 11: The comparison of mean execution times in seconds for the forester and H2O for binary classification experiments.

task_name forester H2O difference relative difference
banknote-authentication 818.33 2521.33 -1703 0.28
blood-transfusion-service-center 155.67 555.67 -400 0.26
breast-w 451.33 797.33 -346 0.57
credit-approval 805 1513 -708 0.53
credit-g 2453 4234 -1781 0.58
diabetes 1645.67 2643.67 -998 0.62
kr-vs-kp 451.33 806.67 -355.33 0.57
phoneme 2748.33 3695.33 -947 0.67

Table 12: The comparison of mean execution times in seconds for the forester and H2O for regression experiments.

task_name forester H2O difference relative difference
2dplanes 401 1050.67 -649.67 0.38
bank32nh 708.67 1214.67 -506 0.58
elevators 720.33 1435.33 -715 0.5
kin8nm 544.67 1564 -1019.33 0.35
Mercedes_Benz_Greener_Manufacturing 848 1371.67 -523.67 0.61
pol 756 1548.33 -792.33 0.49
wine_quality 1317.33 2130 -812.67 0.63

F Package comparison

We have prepared a notebook showing the differences between the packages described in the related work section. The document includes a comparison of package installation, a description of available preprocessing, variable selection options, and model tuning.
In addition, visualizations, methods of explainable machine learning, report preparation, and references to the available package documentation are described. We do not give a final assessment of the best package, because it could be subjective, but we leave the critical assessment to the reader. The notebook is available in the GitHub repository https://github.com/ModelOriented/forester/blob/main/misc/experiments/framework_comparison.Rmd.

G Report example

Forester report
version 1.2.1
2023-05-20 01:36:36

This report contains details about the best trained model, a table with metrics for every trained model, a scatter plot for the chosen metric, and info about the used data.

The best models

This is the binary_clf task.
The best model is: xgboost_RS_5.
The names of the models were created by a pattern Engine_TuningMethod_Id, where:
- Engine describes the engine used for the training (random_forest, xgboost, decision_tree, lightgbm, catboost),
- TuningMethod describes how the model was tuned (basic for basic parameters, RS for random search, bayes for Bayesian optimization),
- Id for separating the random search parameters sets.
More details about the best model are present at the end of the report.

no. name accuracy auc f1
13 xgboost_RS_5 0.7919 0.8088 0.2791
7 ranger_RS_4 0.7785 0.6965 0.1538
18 lightgbm_RS_5 0.7785 0.7361 0.4211
2 xgboost_model 0.7718 0.7090 0.4138
14 lightgbm_RS_1 0.7718 0.7578 0.3704
4 ranger_RS_1 0.7651 0.7930 NaN
6 ranger_RS_3 0.7651 0.7228 NaN
10 xgboost_RS_2 0.7651 0.7801 NaN
11 xgboost_RS_3 0.7651 0.7367 NaN
16 lightgbm_RS_3 0.7651 0.7690 NaN
21 lightgbm_bayes 0.7651 0.7340 0.3636
8 ranger_RS_5 0.7584 0.7579 0.0526
12 xgboost_RS_4 0.7517 0.6609 0.3729
19 ranger_bayes 0.7517 0.7333 0.2449
20 xgboost_bayes 0.7517 0.7409 0.2449
1 ranger_model 0.7450 0.7063 0.3214
3 lightgbm_model 0.7450 0.6842 0.3871
9 xgboost_RS_1 0.7450 0.6619 0.3667
15 lightgbm_RS_2 0.7181 0.6058 0.3824
17 lightgbm_RS_4 0.7181 0.6058 0.3824
5 ranger_RS_2 0.7114 0.6929 0.2712

Plots for all models
[Figure: model comparison plot of accuracy, AUC, and F1 for xgboost_model, lightgbm_RS_1, xgboost_RS_5, ranger_RS_4, and lightgbm_RS_5]

Plots for the best model - xgboost_RS_5
[Figure: ROC curve (AUC = 0.8088)]
[Figure: confusion matrix of target vs. prediction]

Feature Importance for the best model - xgboost_RS_5
[Figure: root mean square error (RMSE) loss after permutations, created for the xgb.Booster model, for features V1-V4]

Details about data

——————– CHECK DATA REPORT ——————–
The dataset has 748 observations and 5 columns which names are: V1; V2; V3; V4; Class; with the target value described by a column: Class.
No static columns.
No duplicate columns.
No target values are missing.
No predictor values are missing.
No issues with dimensionality.
Strongly correlated, by Spearman rank, pairs of numerical values are: V2 - V3: 1;
These observations might be outliers due to their numerical columns values: 1 10 116 342 496 497 498 499 5 500 501 503 504 505 506 518 529 747 748;
Dataset is unbalanced with: 3.202247 proportion with 1 being a dominating class.
Columns names suggest that none of them are IDs.
Columns data suggest that none of them are IDs.
——————– CHECK DATA REPORT END ——————–

The best model details

------------ Xgboost model ------------
Parameters
niter: 20
evaluation_log: iter : train_auc
o1S45iUH4H
Q3DWpGoX7PD
automl.cc/AutoML/2023/ABCD_Track
2023
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling
["Anna Kozak", "Hubert Ruczy\u0144ski"]
The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, there are limited R solutions available with high entry level which means they are not accessible to everyone. To fill this gap, we present the $\textit{forester}$ package, which offers ease of use regardless of the user's proficiency in the area of machine learning. The $\textit{forester}$ package is an open-source AutoML package implemented in R designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the $\textit{forester}$ package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.
["machine learning", "automated machine learning", "tree-based models", "automated reporting"]
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling

Anna Kozak, Hubert Ruczyński
Warsaw University of Technology

Abstract
The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, the available R solutions are limited and have a high entry level, which means they are not accessible to everyone. To fill this gap, we present the forester package, which offers ease of use regardless of the user's proficiency in the area of machine learning.
The forester package is an open-source AutoML package implemented in R, designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the forester package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.

1 Introduction

Machine learning is being used more and more in the world around us. Every day, models are created to assist doctors (Shimizu and Nakayama, 2020), financiers (Jorge et al., 2022), or tourists (Fararni et al., 2021). With the increasing demand for model building, research is being conducted on automatically developing tools to build artificial intelligence based solutions.
Many types of models are used in machine learning, ranging from decision rules (for example, the scoring card model) to complex neural network structures modeling natural language (large language models, for example, ChatGPT (Bavarian et al., 2022)).
Viewing machine learning in terms of tabular data, we have a wide range of models available, from decision trees and linear or logistic regression to random forests, SVMs, or neural networks. However, tree-based models are the most widely used; the main reason behind this is their high predictive efficiency. A simple decision tree model gives relatively satisfactory results, but using multiple trees to create a random forest allows significantly higher predictive power (Caruana et al., 2008; Grinsztajn et al., 2022).
Automating the process of building machine learning models can include many different components. For example, the CRoss Industry Standard Process for Data Mining (CRISP-DM) (Wirth and Hipp, 2000) is the most common methodology for data mining, analytics, and data science projects. But the basic framework of an automatic machine learning system is the preparation of models based on data entered by the user. This process can be extended in various directions; for example, a preliminary analysis of the given data can look for potential data errors or outlier observations, i.e., exploratory data analysis. Another essential element may be the search space of the model's hyperparameters. Optimization of hyperparameters can be based on simple methods such as a predefined parameter grid or random search. Another way to select hyperparameters is to use Bayesian optimization (Snoek et al., 2012) or meta-learning (Vilalta et al., 2004; Vanschoren, 2019; Woźnica and Biecek, 2022). After tuning the models with hyperparameter optimization, the next step we can add is to analyze the results in the form of a leaderboard or visualization.

AutoML 2023 Workshop Track ©2023 the authors, released under CC BY 4.0
By extending with explanatory methods (Biecek and Burzykowski, 2021) and reporting, the entire machine learning process can be finalized.
Automating the process of machine learning gives access to data science tools to people who are starting out in data analysis and modeling. At the same time, it improves and speeds up the work of experienced data scientists, who can make at least baseline models using a single line of code.
In this paper, we present an AutoML package written in R (R Core Team, 2022) to create models for regression and binary classification tasks on tabular data. The main goals of the package are: making the package easy to use, fully automating all the necessary steps inside the ML pipeline, and providing results that are easy to create, understand, and use for model diagnostics. The availability of responsible machine learning methods in the solution allows the results of complex models to be interpreted. Changing the focus from obtaining the best possible outcomes to the interpretability of the results is a novelty among AutoML tools. The implementation of the forester package can be found in our GitHub repository [1]. The software is open source and contains comprehensive documentation with examples of use.

2 Related works

Packages for AutoML are prevalent in Python. One of the very first AutoML methods and open-source software packages developed for the data science community, Auto-WEKA (Thornton et al., 2013), was followed by Auto-Sklearn (Feurer et al., 2015, 2022) and TPOT (Tree-Based Pipeline Optimization Tool) (Olson et al., 2016), both in Python. But in R, there are few approaches. One of them is the H2O package (LeDell et al., 2022). It is an open-source library that is an in-memory, distributed, fast, and scalable machine learning and predictive analytics platform that creates a ranked list of models easily exported for use in a production environment.
The authors have created an easy-to-use interface that automates the training of multiple candidate models. H2O's AutoML is also designed for more advanced users by providing a simple wrapper function that performs many modeling tasks. H2O's AutoML process automatically trains models and tunes them within a user-specified time limit. To better understand the quality of models in H2O, we can rely on metrics such as R2 and mean square error (MSE). For comparison, in the forester package, we can compare models using the most commonly used metrics or even define a new custom metric. What particularly distinguishes the forester package from H2O is the preprocessing. In the latter's case, it only includes target encoding and is in an experimental stage. In the forester package, we have more accurate and extensive preprocessing. In addition, H2O always requires Java to work, so the user must also install it.
The second widely-used framework is the mlr3 package (Lang et al., 2019), which provides a framework for classification, regression, survival analysis, and other ML tasks such as cluster analysis. It provides the ability to perform hyperparameter tuning and feature selection. The package is well-documented, contains many functions and models, and provides many capabilities. However, it is different from a typical AutoML package, as creating models requires knowledge of how to do it and some time to assemble such a pipeline. It also has its drawbacks, such as the need for additional preprocessing: for example, the XGBoost model accepts only numerical data without factors. There is also no built-in way to divide the dataset into training, testing, and validation subsets. The mlr3 package provides functionality that builds on the basic components of machine learning. It can be extended to include preprocessing, pipelining, visualization, additional learners, additional task types, and more. To obtain these capabilities, we need to install many other libraries.
In the forester package, we provide these components at once, and with a single function, we can perform preprocessing, prepare a visualization of the results, and generate a report. A more detailed comparison of the forester package with H2O and mlr3 is presented in Appendix F.

[1] https://github.com/ModelOriented/forester

[Figure 1 diagram: raw data flows through (1) data check (missing values, correlated features, irrelevant columns), (2) data preparation (data splitting, preprocessing, data imputation), (3) model training and tuning (default parameters, random search, Bayesian optimization), and (4) model evaluation (ranked list, customizable metrics), plus additional forester features: save(), report(), explain().]
Figure 1: A diagram presenting the forester pipeline. The forester analyses poor-quality data with the in-built data check (1), which points to possible issues, and later data preparation (2) handles them during the preprocessing. In the next step, the models are trained with default and random-searched parameters and tuned with a Bayesian optimization algorithm (3). In the end, trained models are evaluated (4) and presented as a ranked list. In addition, the package offers the user additional features.

3 forester AutoML

The forester is an AutoML package automating the machine learning pipeline, starting from the data preparation, through model training, to the interpretability of the results. This way, we minimize the user's time spent on basic and often repetitive activities related to the machine-learning process. Despite the high automation of the pipeline shown in Figure 1, we expose multiple parameters which advanced data scientists can use to customize the model creation. The whole package relies on the four pillars described in this section.

1. Data check
The first pillar, called data check, concerns the data preparation phase. Data preparation is a crucial part of the modeling process (Rutkowski et al., 2010), so we cannot blindly assume a single way of transforming the data for all cases.
Appropriate data preprocessing is crucial to building a model with a small error rate. To face that issue, we introduce a data check report summarizing the dataset with some basic information and pointing out possible problems. Data problems can affect the following modeling stages and be relevant to any model. The data check report points out id-like, duplicated, static, or highly correlated columns. Moreover, it points out the outliers, missing values, and the imbalance of the target. This way we can propose some simple heuristic data preprocessing methods, yet more advanced users are able to address the reported issues on their own by studying the data check report.

2. Data preparation
Preparing the data for modeling is another crucial aspect after checking the data. It can be done using a dedicated tool, but the forester package offers two general-purpose preprocessing methods, basic and advanced. The main purpose of this function is to remove the need to prepare data manually and differently for different types of models. The basic preparation consists of the actions that are necessary for the package to work, that is: the removal of static columns, binarization of the target variable, and imputation of the missing data using the MICE algorithm (Buuren and Groothuis-Oudshoorn, 2011). The advanced method additionally includes the removal of id-like columns (features suspected of being ids), the removal of highly correlated columns (Spearman's rank for the numerical features and Cramér's V rank for the categorical features), as well as feature selection with the Boruta algorithm (Kursa and Rudnicki, 2010). Additionally, every model in the forester package requires a different data format, which is also prepared inside the main function.

3. Model training and tuning
The forester package's third and most important pillar is model training and tuning. Our solution focuses on the tree-based model family because of its high-quality performance for various tabular data tasks.
We have limited ourselves to 5 well-known engines with different strong and weak points, so that they complement each other.
We have included the basic decision tree from the partykit package (Hothorn and Zeileis, 2015) as an extremely light engine, but mostly we have focused on the ensemble models. The only bagging representative is the random forest from the ranger package (Wright and Ziegler, 2017), which is reluctant to overfit.
We have also considered three different boosting algorithms. The XGBoost model (Chen and Guestrin, 2016) is highly effective, but due to the need for one-hot encoding, it suffers from an abundance of categorical features. The LightGBM model (Ke et al., 2017) works best for medium and large datasets, but has problems with small ones. The last engine is CatBoost (Prokhorenkova et al., 2018), which can achieve superior performance but requires the Java environment to be installed, which is a minor inconvenience.
The models are trained with three approaches: using the default parameters, performing the random search algorithm within the predefined parameter space, and running an advanced Bayesian optimization algorithm for fine-grained tuning. The first method is the baseline for the other models. With the second one, we can cheaply create multiple models and explore various parameter combinations. The best and most time-consuming method is the Bayesian optimization from the ParBayesianOptimization package. However, it is extremely useful for complex tasks.

4. Model evaluation
The last pillar is the automatic evaluation of the trained models. The forester package assesses every trained model by various metrics, such as accuracy, area under the receiver operating characteristic curve (AUC), and F1 for the binary classification tasks, and Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), or R2 for the regression tasks. The results are later presented as a ranked list sorted by the outcomes (for example, ascending order for RMSE, and descending for AUC).
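As a hedged sketch of this evaluation pillar, a user-supplied metric could look as follows. The parameter names (metric_function, metric_function_name, metric_function_decreasing, sort_by) come from the train() interface shown in Listing 2; the (predictions, observed) argument order of the metric function and the use of the metric name in sort_by are our assumptions, not the package's documented example:

```r
# Hypothetical custom metric: balanced accuracy for a binary target.
# The (predictions, observed) argument order is an assumption.
balanced_accuracy <- function(predictions, observed) {
  sens <- sum(predictions == 1 & observed == 1) / sum(observed == 1)
  spec <- sum(predictions == 0 & observed == 0) / sum(observed == 0)
  (sens + spec) / 2
}

# Passing the metric through the metric_function* parameters of train().
train_output <- train(data = `blood-transfusion-service-center`,
                      y = 'Class',
                      metric_function = balanced_accuracy,
                      metric_function_name = 'balanced_accuracy',
                      metric_function_decreasing = TRUE,
                      sort_by = 'balanced_accuracy')
```

The sorting direction is controlled by metric_function_decreasing, mirroring how the built-in ranked list orders RMSE ascending and AUC descending.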
Moreover, the user can define their own metrics and provide them for the evaluation phase.

4 forester features

One of the most important goals for the forester package is convenience of use, helping the users focus more on analyzing the results instead of writing the code. To obtain such a user-friendly environment, the forester offers plenty of additional features useful for data scientists.

4.1 Model explanations
In recent years, interpretable machine learning has become a significant trend in machine learning. Tools providing interpretability, such as DALEX (Biecek, 2018) or iml (Molnar et al., 2020), allow data scientists to explain how the models they create work, making it easier to detect their misbehavior. Models' explainability also enhances trust in such tools, even in demanding environments like medical research. To support using explainable methods for the models trained by the forester, we have created a wrapper for the DALEX explainer compatible with our package. This way, the user can easily create various explanations for the trained models.

4.2 Saving the outcomes
Another crucial feature is the save function, which lets the user save the training output. The returned forester object contains lots of information, such as the preprocessed dataset, split datasets, split indexes, ranked lists for the training, testing, and validation datasets, the predictions of the models, and much more. The abundance of objects makes it incredibly important to save the outcomes after the time-consuming training process.

4.3 Automated report
Last but not least, our solution offers an automatically generated report that helps users quickly and easily analyze the training results. The main goal of this feature is to ensure that every user is able to easily assess the quality of the trained models. The report consists of basic information about the dataset, a data check report, a ranked list of the best ten models, and visualizations concerning model quality.
An example report for the blood-transfusion-service-center dataset (from the OpenML-CC18 benchmark (Bischl et al., 2021)) is provided in Appendix G.
The plots are divided into two groups; the first one compares the outcomes of different models, which helps to decide which model is the best. For example, guided by the radar chart comparison plot, we can choose a model with slightly worse accuracy, but better AUC and F1 values.
The second type of plot concentrates on the model with the best performance, and its most prominent feature is a feature importance plot. This visualization lets us understand which variables are the most important for the model; thus, we can evaluate its correctness.
It is worth noticing that the reports, mostly the visualizations, are different for binary classification and regression tasks, as we measure their performance differently.

5 User interface

5.1 Training function
The forester's main train() function runs the entire AutoML pipeline, including the data preparation, model training, and evaluation. To keep the package as simple as possible, the function requires only the dataset and target column name (Listing 1); however, to keep the tool versatile, there are lots of custom parameters for more advanced users (Listing 2). With the latter option, the user can specify the number of Bayesian optimization iterations, the number of random search evaluations, the proportions of the train, test, and validation subsets, change the preprocessing methods, or even add their own evaluation metric.

train_output <- train(data = lisbon, y = 'Price')

Listing 1: Training models with the forester package and default parameters.
With the latter option, theuser can specify the amount of Bayesian optimization iterations, the number of random searchevaluations, proportions of the train, test, and validation subsets, change the preprocessing methodsor even add their evaluation metric.train _ output←train ( data = lisbon , y = 'Price ')Listing 1: Training models with the forester package and default parameters.5train _ output←train ( data = lisbon ,y = 'Price ',verbose = TRUE ,engine = c( 'ranger ','xgboost ','decision _tree ','lightgbm ','catboost '),train _ test _ split = c(0.6 , 0.2 , 0.2) ,bayes _ iter = 10,random _ evals = 3,advanced _ preprocessing = FALSE ,metrics = 'auto ',sort _by = 'auto ',metric _ function = NULL ,metric _ function _ name = NULL ,metric _ function _ decreasing = TRUE ,best _ model _ number = 5)Listing 2: Training models with the forester package and custom parameters.5.2 Extensive featuresApart from the train() function, the user can utilize additional functions, which is helpful duringthe modeling process. The check_data() function (Listing 3) enables printing a data check reportoutside of the train() function. The save() function (Listing 4) lets us save the outcome of thetraining process, whereas the report() function (Listing 5) creates a training report. 
The last extension is the explain() function (Listing 6), which creates a DALEX explainer that can be used to generate multiple visualizations concerning model interpretability with the DALEX package.

check_data(data = `blood-transfusion-service-center`, y = 'Class')

Listing 3: Generating a data check report.

save(train_output, name = 'train_output.RData')

Listing 4: Saving the train output.

report(train_output, 'report.pdf')

Listing 5: Generating a report from the train output.

exp <- explain(models = train_output$best_models[[1]],
               test_data = train_output$data,
               y = train_output$y,
               verbose = FALSE)

Listing 6: Creating a model explainer, which lets us use functions from the DALEX package.

6 Performance

To evaluate the performance of the package, we decided to compare it to the H2O framework on binary classification tasks from the OpenML-CC18 benchmark (Bischl et al., 2021) and regression tasks from OpenML (Vanschoren et al., 2013). Due to the limited computational resources, we chose a subset of 8 datasets for classification and 7 for regression, described in Table 1 and Table 2, respectively. The binary classification datasets consisted mainly of categorical variables and contained many missing values, a significant obstacle for both solutions, whereas the regression tasks had no missing values and mostly numeric or binary values.
During the experiment, we trained the forester package three times for each dataset, with random seeds provided for the data splitting function inside the forester. The same splits were later used for the H2O framework. A single training iteration was executed for the decision tree, random forest, LightGBM, and CatBoost engines with ten iterations of the Bayesian optimization and ten random search evaluations.
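Under the train() interface from Listing 2, one such classification training iteration might be sketched as below. This is a hedged illustration of the protocol, not the benchmark script itself: the dataset object and the seed value are illustrative, and the engine names follow those listed in Listing 2.

```r
# Sketch of a single benchmark training iteration for the classification
# tasks: four engines, ten Bayesian optimization iterations, and ten
# random search evaluations. The seed value and dataset are illustrative.
set.seed(1)
run <- train(data = `blood-transfusion-service-center`,
             y = 'Class',
             engine = c('decision_tree', 'ranger', 'lightgbm', 'catboost'),
             bayes_iter = 10,
             random_evals = 10)
```

Repeating such a call three times with different seeds yields the three observations per dataset reported in the results.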
For the regression task, we additionally added an XGBoost engine. To ensure that both frameworks had the same amount of time, we measured it for every forester training iteration and provided it to the respective H2O AutoML runs. This H2O functionality did not work as intended, and in the end this framework had a two times longer training time on average. This factor certainly improved the H2O results, and we have to bear that in mind when comparing the outcomes. For further details, see Appendix E. Additionally, to ensure the same data split, we used the indexes saved during the forester training. The source codes are included in Appendix A.

The comparison of performance for both frameworks is presented in Figure 2 and Figure 3. For the raw results, as well as aggregated tabular ones, see Appendix C. As one can see, for the binary classification task, the forester outperformed the H2O framework on five datasets: banknote-authentication, blood-transfusion-service-center, credit-approval, credit-g, and diabetes. The outcomes for the very simple datasets kr-vs-kp and breast-w were similar, and H2O obtained better performance for the phoneme data. For the regression tasks, the results were comparable to H2O's for most tasks or slightly worse, as for the pol dataset. The results show that the forester creates high-quality models that are competitive with existing solutions.

However, our conclusions cannot be too far-fetched, since we tested the package on only a few datasets for binary classification and regression tasks.
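The evaluation protocol described above — three repetitions with fixed seeds, a 60/20/20 train/test/validation split shared by both frameworks, and the measured forester wall-clock time passed on as the H2O time budget — can be sketched as follows. This is an illustrative Python sketch only: `split_indices`, `run_forester`, and `run_h2o` are hypothetical stand-ins for the actual R benchmark code referenced in Appendix A.

```python
import random
import time

def split_indices(n_rows, proportions=(0.6, 0.2, 0.2), seed=123):
    """Shuffle row indices and cut them into train/test/validation parts,
    mirroring the 60/20/20 split used in the experiments (hypothetical
    helper, not the forester implementation)."""
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)
    n_train = int(proportions[0] * n_rows)
    n_test = int(proportions[1] * n_rows)
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])

# Three repetitions; both frameworks reuse the same split, and the measured
# forester time becomes the H2O budget (placeholder calls commented out).
for seed in (123, 2137, 21):
    train_idx, test_idx, valid_idx = split_indices(748, seed=seed)
    start = time.time()
    # run_forester(train_idx, test_idx, valid_idx)   # hypothetical call
    budget = time.time() - start
    # run_h2o(train_idx, test_idx, valid_idx, max_runtime_secs=budget)
```

As Appendix E shows, constraining the H2O runtime this way did not work reliably in practice, which is why the measured execution times are reported separately.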
We cannot say that the forester package's predictive power is better than H2O's, but they clearly are competitive.

Table 1: A subset of OpenML-CC18 benchmark datasets used during the evaluation process of the forester package, which are tabular data objects presenting the binary classification tasks. The features are mostly categorical, and they contain lots of missing values.

Name                              Number of columns  Number of rows
kr-vs-kp                          37                 3196
breast-w                          10                 699
credit-approval                   16                 690
credit-g                          21                 1000
diabetes                          9                  768
phoneme                           6                  5404
banknote-authentication           5                  1372
blood-transfusion-service-center  5                  748

Table 2: A subset of OpenML datasets used during the evaluation process of the forester package, which are tabular data objects presenting the regression tasks. In this case there were no missing values, and the features were mostly numerical or binary.

Name                                 Number of columns  Number of rows
bank32nh                             33                 8192
wine_quality                         12                 6497
Mercedes_Benz_Greener_Manufacturing  378                4209
kin8nm                               9                  8192
pol                                  49                 15000
2dplanes                             11                 40768
elevators                            19                 16599

[Figure 2 plot: accuracy (x-axis, 0.5 to 1.0) of the forester and H2O frameworks on the train, test, and validation subsets of each binary classification dataset.]

Figure 2: Performance comparison for the forester and H2O frameworks for the datasets described in Table 1. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap.
This plot clearly shows that the forester performs better than the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

[Figure 3 plot: RMSE (x-axis, 0.0 to 10.0) of the forester and H2O frameworks on the train, test, and validation subsets of each regression dataset.]

Figure 3: Performance comparison for the forester and H2O frameworks for the datasets described in Table 2. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap. This plot shows that the forester performs comparably to the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

7 Limitations and Broader Impact Statement

The forester package has limitations in the availability of models. The library contains only tree-based models, but this family proves to be extremely versatile. Only binary classification and regression are available in the current version of the package. Preparing models for multi-criteria classification, cluster analysis, or survival analysis is currently impossible. However, these features can be easily implemented in the future. The package currently performs better with smaller datasets; a large allocation of memory and time is needed for large and complex data.

One of the strongest points of the forester package is that it is incredibly easy to use, even without broad machine learning expertise. This approach, however, raises the risk that the models trained with the package will be of poor quality, for example, due to training on a low-quality dataset, or that the outcomes will be misunderstood or incorrectly interpreted by an inexperienced user.
The reporting module addresses all of these responsible machine learning concerns: it informs about possible issues with the data, measures the quality of the models, and provides their explanations.

8 Conclusions

This paper presents an R package for AutoML that creates models for regression and binary classification tasks conducted on tabular data. Our solution addresses the needs we have observed in AutoML tools in various programming languages. The main goals of the package are to keep it stable and easy to use, to automate all the necessary steps inside the ML pipeline, and to provide results that are easy to create and understand and that allow for diagnostics of the models. To achieve these results, we have focused only on the best representatives of the family of tree-based models, which show superiority over other methods on tabular data. Furthermore, we provide additional functions that allow the user to save the models, create explanations, and create a report describing the learning process and explaining the developed models. Experiments carried out tentatively indicate that our solution obtains more predictive power than currently existing solutions in R.

9 Submission Checklist

1. For all authors. . .

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We introduced the forester package and described its potential. Section 3 and Section 4 describe the various features.

(b) Did you describe the limitations of your work? [Yes] See Section 7.

(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 7.

(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] We believe that our paper conforms to the guidelines.

2. If you are including theoretical results. . .

(a) Did you state the full set of assumptions of all theoretical results?
[N/A] We have no theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] We have no theoretical results.

3. If you ran experiments. . .

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)? [Yes] See Appendix A.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] The most important results analyzed in this paper are presented or mentioned (via a link) in Appendix C.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] The code is available in the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is available in the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] The training details are mentioned in Section 6 of the main paper, as well as in the source code described in Appendix A.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training, and hyperparameters for that code?
[Yes] The methods were compared on the same train, test, and validation subsets, and the hyperparameter search space was the default one for each AutoML framework.

(g) Did you run ablation studies to assess the impact of different components of your approach? [No] The package at this point is pretty straightforward and does not contain many components that could alter the outcomes. A possible ablation study could be applied to the advanced preprocessing method; however, we did not have enough computational power for running the benchmark again.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] The models were compared by the same metrics: accuracy, AUC, and F1 for classification, and RMSE, MSE, R2, and MAE for regression.

(i) Did you compare performance over time? [No] We did not have enough resources for multiple experiment executions.

(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] As described in Section 6, we performed three runs of the forester and H2O training, with the random seeds for the train, test, and validation splits set to the values 123, 2137, and 21.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not have error bars on the visualizations, but we provide exact values without any statistical aggregations.

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We used a tabular benchmark consisting of 8 datasets describing the binary classification tasks from the OpenML-CC18 benchmark, as described in Section 6.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix B.

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also hyperparameters of your own method)?
[N/A] During the experiments, all computations were conducted by the AutoML frameworks, and no additional tuning was included.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .

(a) If your work uses existing assets, did you cite the creators? [Yes] A full list of the cited papers/tools is given in the references.

(b) Did you mention the license of the assets? [Yes] The used assets, mostly R packages, are described in Appendix D.

(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] The forester package is a new asset: https://github.com/ModelOriented/forester.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] See Section 6; we are using OpenML-CC18 and its data. We cited all data sources according to the guidelines of datasets on OpenML (and in OpenML-CC18).

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] Our data does not contain personally identifiable information or offensive content.

5. If you used crowdsourcing or conducted research with human subjects. . .

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not do research with human subjects.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not do research with human subjects.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not do research with human subjects.

Acknowledgements. We would like to thank Adrianna Grudzień and Patryk Słowakiewicz for their development work on the forester package.
We also thank Katarzyna Woźnica, Hubert Baniecki, Mikołaj Spytek, and Mateusz Krzyziński for their valuable comments about the study.

References

Bavarian, M., Jun, H., Tezak, N., Schulman, J., McLeavey, C., Tworek, J., and Chen, M. (2022). Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255.

Biecek, P. (2018). DALEX: Explainers for Complex Predictive Models in R. Journal of Machine Learning Research, 19(84):1–5.

Biecek, P. and Burzykowski, T. (2021). Explanatory Model Analysis. Chapman and Hall/CRC, New York.

Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., and Vanschoren, J. (2021). OpenML benchmarking suites. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

Buuren, S. and Groothuis-Oudshoorn, C. (2011). MICE: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software, 45.

Caruana, R., Karampatziakis, N., and Yessenalina, A. (2008). An empirical evaluation of supervised learning in high dimensions. Proceedings of the 25th International Conference on Machine Learning, pages 96–103.

Chen, T. and Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785–794.

Fararni, K. A., Nafis, F., Aghoutane, B., Yahyaouy, A., Riffi, J., and Sabri, A. (2021). Hybrid recommender system for tourism based on big data and AI: A conceptual framework. Big Data Mining and Analytics, 4(1):47–55.

Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2022). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. Journal of Machine Learning Research, 23(261):1–61.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning.
In Advances in Neural Information Processing Systems, volume 28.

Grinsztajn, L., Oyallon, E., and Varoquaux, G. (2022). Why do tree-based models still outperform deep learning on typical tabular data? In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Hothorn, T. and Zeileis, A. (2015). partykit: A Modular Toolkit for Recursive Partytioning in R. Journal of Machine Learning Research, 16(118):3905–3909.

Jorge, C. C., Antonio, O. A. J., Hugo, G. M. V., and Hugo, O. P. D. (2022). Machine Learning for Personal Credit Evaluation: A Systematic Review. WSEAS Transactions on Computer Research, 10:62–73.

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems, volume 30.

Kursa, M. B. and Rudnicki, W. R. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software, 36(11):1–13.

Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., and Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44):1903.

LeDell, E., Gill, N., Aiello, S., Fu, A., Candel, A., Click, C., Kraljevic, T., Nykodym, T., Aboyoun, P., Kurka, M., and Malohlava, M. (2022). h2o: R Interface for the 'H2O' Scalable Machine Learning Platform. R package version 3.38.0.1.

Molnar, C., Casalicchio, G., and Bischl, B. (2020). Interpretable machine learning – a brief history, state-of-the-art and challenges. In ECML PKDD 2020 Workshops, pages 417–431.

Olson, R. S., Bartley, N., Urbanowicz, R. J., and Moore, J. H. (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO '16, pages 485–492.

Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. (2018).
CatBoost: unbiased boosting with categorical features. In Advances in Neural Information Processing Systems, volume 31.

R Core Team (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.

Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L., and Zurada, J. (2010). Artificial Intelligence and Soft Computing, Part II: 10th International Conference, ICAISC 2010.

Shimizu, H. and Nakayama, K. I. (2020). Artificial intelligence in oncology. Cancer Science, 111(5):1452–1460.

Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, volume 25.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 847–855.

Vanschoren, J. (2019). Meta-Learning, pages 35–61. Springer International Publishing, Cham.

Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. (2013). OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49–60.

Vilalta, R., Giraud-Carrier, C., Brazdil, P., and Soares, C. (2004). Using meta-learning to support data mining. International Journal of Computer Science Applications, 1.

Wirth, R. and Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining.

Woźnica, K. and Biecek, P. (2022). Towards explainable meta-learning. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part I, pages 505–520.

Wright, M. N. and Ziegler, A. (2017).
ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R. Journal of Statistical Software, 77(1):1–17.

A Source Code

The source code of the experiments, the prepared visualizations, and the tables from Appendix C are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments as the forester_benchmark.Rmd file. The markdown notebook file describes the installation process, and it can be safely executed with the guidance of our remarks between the code chunks.

B Resources

As mentioned in Section 6, our team was limited in computational power. The experiment was conducted on our private PC with 32GB of RAM, CPU: 11th Gen Intel(R) Core(TM) i7-11700KF @ 3.60GHz (16 cores), and GPU: NVIDIA GeForce RTX 3070 Ti; however, as the forester is not yet implemented to work on the GPU, only the CPU was used.

C Raw results

In this section, we provide the raw results mentioned in Section 6, which were used in Figure 2. Raw results for the train, test, and validation datasets are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/raw_training_results. Here we offer the results aggregated as the mean values of the metrics, which are presented in Table 3, Table 4, and Table 5 for the binary classification tasks. These tables also broaden our perspective by providing AUC and F1 values. The results for the regression tasks are presented in Table 6, Table 7, and Table 8. These tables also broaden our perspective
These tables also broaden our perspectiveby providing MSE, R2, and MAE values.13Table 3: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification training datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 1 1 1banknote-authentication H2O 0.929 0.923 0.905blood-transfusion-service-center forester 0.77 0.752 1blood-transfusion-service-center H2O 0.7 0.682 0.519breast-w forester 1 1 1breast-w H2O 0.998 0.998 0.997credit-approval forester 0.999 1 1credit-approval H2O 0.961 0.959 0.955credit-g forester 0.967 0.998 1credit-g H2O 0.906 0.855 0.938diabetes forester 0.991 0.999 1diabetes H2O 0.874 0.871 0.826kr-vs-kp forester 1 1 1kr-vs-kp H2O 0.999 0.999 0.965phoneme forester 1 1 1phoneme H2O 1 1 1Table 4: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification testing datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 0.995 0.995 1banknote-authentication H2O 0.933 0.927 0.915blood-transfusion-service-center forester 0.796 0.772 0.976blood-transfusion-service-center H2O 0.713 0.707 0.54breast-w forester 0.976 0.984 0.986breast-w H2O 0.971 0.97 0.959credit-approval forester 0.885 0.931 0.942credit-approval H2O 0.882 0.882 0.87credit-g forester 0.733 0.79 0.865credit-g H2O 0.743 0.64 0.829diabetes forester 0.768 0.823 0.799diabetes H2O 0.753 0.727 0.643kr-vs-kp forester 0.994 0.999 0.991kr-vs-kp H2O 0.991 0.991 0.991phoneme forester 0.909 0.96 0.867phoneme H2O 0.904 0.895 0.84214Table 5: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification validation datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 1 1 1banknote-authentication H2O 0.916 0.908 0.887blood-transfusion-service-center forester 0.775 0.773 0.833blood-transfusion-service-center H2O 0.675 
0.68 0.509breast-w forester 0.938 0.968 0.956breast-w H2O 0.967 0.97 0.953credit-approval forester 0.855 0.908 0.939credit-approval H2O 0.867 0.862 0.842credit-g forester 0.705 0.788 1credit-g H2O 0.758 0.635 0.846diabetes forester 0.747 0.803 0.866diabetes H2O 0.755 0.735 0.656kr-vs-kp forester 0.99 0.999 0.99kr-vs-kp H2O 0.99 0.99 0.99phoneme forester 0.901 0.954 0.851phoneme H2O 0.9 0.896 0.839Table 6: This table provides mean RMSE, MSE, R2, and MAE values for the forester andH2O frameworkfor all regression training datasets used in the benchmark.task_name framework rmse mse r2 mae2dplanes forester 0.697 0.5 0.974 0.4232dplanes H2O 0.984 0.969 0.95 0.785bank32nh forester 0.001 0 1 0.001bank32nh H2O 0.054 0.003 0.806 0.037elevators forester 0.001 0 0.978 0.001elevators H2O 0.002 0 0.942 0.001kin8nm forester 0.012 0 0.997 0.009kin8nm H2O 0.066 0.004 0.937 0.051Mercedes_Benz_Greener_Manufacturing forester 2.456 6.13 0.963 0.775Mercedes_Benz_Greener_Manufacturing H2O 7.806 61.115 0.625 4.935pol forester 1.139 1.483 0.999 0.699pol H2O 1.803 3.251 0.998 0.829wine_quality forester 0.071 0.005 0.993 0.031wine_quality H2O 0.161 0.027 0.965 0.12415Table 7: This table provides mean RMSE, MSE, R2, and MAE values for the forester andH2O frameworkfor all regression testing datasets used in the benchmark.task_name framework rmse mse r2 mae2dplanes forester 1.003 1.007 0.948 0.8022dplanes H2O 1.004 1.008 0.948 0.802bank32nh forester 0.08 0.006 0.548 0.053bank32nh H2O 0.076 0.006 0.599 0.05elevators forester 0.002 0 0.884 0.002elevators H2O 0.002 0 0.911 0.001kin8nm forester 0.113 0.013 0.816 0.087kin8nm H2O 0.084 0.007 0.899 0.065Mercedes_Benz_Greener_Manufacturing forester 7.554 57.195 0.626 5.039Mercedes_Benz_Greener_Manufacturing H2O 7.583 57.598 0.623 5.222pol forester 4.739 22.508 0.987 2.242pol H2O 3.198 10.278 0.994 1.3wine_quality forester 0.614 0.377 0.505 0.451wine_quality H2O 0.604 0.365 0.521 0.43Table 8: This table provides mean RMSE, MSE, R2, and MAE values for 
the forester and H2O framework for all regression validation datasets used in the benchmark.

task_name                            framework  rmse   mse     r2     mae
2dplanes                             forester   0.999  0.997   0.948  0.799
2dplanes                             H2O        1      0.999   0.948  0.8
bank32nh                             forester   0.082  0.007   0.544  0.053
bank32nh                             H2O        0.078  0.006   0.591  0.052
elevators                            forester   0.002  0       0.875  0.002
elevators                            H2O        0.002  0       0.907  0.001
kin8nm                               forester   0.111  0.012   0.822  0.085
kin8nm                               H2O        0.083  0.007   0.899  0.065
Mercedes_Benz_Greener_Manufacturing  forester   8.464  73.039  0.559  5.261
Mercedes_Benz_Greener_Manufacturing  H2O        8.458  72.911  0.56   5.373
pol                                  forester   4.379  19.256  0.989  1.885
pol                                  H2O        3.01   9.087   0.995  1.213
wine_quality                         forester   0.632  0.399   0.478  0.466
wine_quality                         H2O        0.624  0.389   0.492  0.447

D Used assets

In this section, we describe the packages used for both the forester and the experiments. The packages outside of the forester required for the experiments are listed in Table 9. An additional requirement for the catboost and H2O packages is installed Java. The packages required by the forester, as well as their versions used during the experiment, are presented in Table 10.

Table 9: The packages and their versions under which the experiments were executed and supplemental materials were created.

package     version   license
xlsx        0.6.5     GPL-3
stringr     1.5.0     MIT
ggbeeswarm  0.6.0     GPL (>= 2)
dplyr       1.0.10    MIT
ggplot2     3.4.0     MIT
tictoc      1.1       Apache License (== 2.0)
H2O         3.38.0.1  Apache License (== 2.0)
forester    1.2.1     GPL-3
OpenML      1.12      BSD_3_clause

Table 10: The forester package's dependencies and their versions used during the experiments.

package                  version  licence
Boruta                   7.0.0    GPL (>= 2)
catboost                 1.1.1    Apache License (== 2.0)
crayon                   1.5.2    MIT
DALEX                    2.4.2    GPL
data.table               1.14.2   MPL-2.0
ggplot2                  3.4.0    MIT
ggradar                  0.2      GPL
ggrepel                  0.9.3    GPL-3
knitr                    1.40     GPL
lightgbm                 3.3.2    MIT
mice                     3.14.0   GPL-2 | GPL-3
mltools                  0.3.5    MIT
ParBayesianOptimization  1.2.4    GPL-2
partykit                 1.2-16   GPL-2 | GPL-3
pROC                     1.18.0   GPL (>= 3)
ranger                   0.14.1   GPL-3
rcompanion               2.4.18   GPL-3
rmarkdown                2.16     GPL-3
splitTools               0.3.2    GPL (>= 2)
testthat                 3.1.6    MIT
tibble                   3.1.8    MIT
tinytex                  0.43     MIT
varhandle                2.0.5    GPL (>= 2)
xgboost                  1.6.0.1  Apache License (== 2.0)
stats                    4.1.2    Part of R 4.1.2

E Execution times comparison

In this section, we briefly explore the times needed for every experiment execution for both frameworks. The results presented in Table 11 and Table 12 show that the final execution times differ, despite setting exactly the same times for the H2O experiment as the forester had. Our empirical results show that the H2O runs lasted two times longer on average than the forester's, which puts a different light on the comparison of the frameworks' performance. The raw results needed for these tables are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/execution_times.

Table 11: The comparison of mean execution times in seconds for the forester and H2O for binary classification experiments.

task_name                         forester  H2O      difference  relative difference
banknote-authentication           818.33    2521.33  -1703       0.28
blood-transfusion-service-center  155.67    555.67   -400        0.26
breast-w                          451.33    797.33   -346        0.57
credit-approval                   805       1513     -708        0.53
credit-g                          2453      4234     -1781       0.58
diabetes                          1645.67   2643.67  -998        0.62
kr-vs-kp                          451.33    806.67   -355.33     0.57
phoneme                           2748.33   3695.33  -947        0.67

Table 12: The comparison of mean execution times in seconds for the forester and H2O for regression experiments.

task_name                            forester  H2O      difference  relative difference
2dplanes                             401       1050.67  -649.67     0.38
bank32nh                             708.67    1214.67  -506        0.58
elevators                            720.33    1435.33  -715        0.5
kin8nm                               544.67    1564     -1019.33    0.35
Mercedes_Benz_Greener_Manufacturing  848       1371.67  -523.67     0.61
pol                                  756       1548.33  -792.33     0.49
wine_quality                         1317.33   2130     -812.67     0.63

F Package comparison

We have prepared a notebook showing the differences between the packages described in the related work section. The document includes a comparison of package installation, a description of the available preprocessing, variable selection options, and model tuning. In addition, visualizations, methods of explainable machine learning, report preparation, and references to the available package documentation are described. We do not give a final assessment of the best package because it could be subjective, but we expose the reader to criticism. The notebook is available in the GitHub repository https://github.com/ModelOriented/forester/blob/main/misc/experiments/framework_comparison.Rmd.

G Report example

Forester report
version 1.2.1
2023-05-20 01:36:36

This report contains details about the best trained model, a table with metrics for every trained model, a scatter plot for the chosen metric, and info about the used data.

The best models

This is the binary_clf task. The best model is: xgboost_RS_5.

The names of the models were created by a pattern Engine_TuningMethod_Id, where:
• Engine describes the engine used for the training (random_forest, xgboost, decision_tree, lightgbm, catboost),
• TuningMethod describes how the model was tuned (basic for basic parameters, RS for random search, bayes for Bayesian optimization),
• Id for separating the random search parameter sets.

More details about the best model are present at the end of the report.

no.  name            accuracy  auc     f1
13   xgboost_RS_5    0.7919    0.8088  0.2791
7    ranger_RS_4     0.7785    0.6965  0.1538
18   lightgbm_RS_5   0.7785    0.7361  0.4211
2    xgboost_model   0.7718    0.7090  0.4138
14   lightgbm_RS_1   0.7718    0.7578  0.3704
4    ranger_RS_1     0.7651    0.7930  NaN
6    ranger_RS_3     0.7651    0.7228  NaN
10   xgboost_RS_2    0.7651    0.7801  NaN
11   xgboost_RS_3    0.7651    0.7367  NaN
16   lightgbm_RS_3   0.7651    0.7690  NaN
21   lightgbm_bayes  0.7651    0.7340  0.3636
8    ranger_RS_5     0.7584    0.7579  0.0526
12   xgboost_RS_4    0.7517    0.6609  0.3729
19   ranger_bayes    0.7517    0.7333  0.2449
20   xgboost_bayes   0.7517    0.7409  0.2449
1    ranger_model    0.7450    0.7063  0.3214
3    lightgbm_model  0.7450    0.6842  0.3871
9    xgboost_RS_1    0.7450    0.6619  0.3667
15   lightgbm_RS_2   0.7181    0.6058  0.3824
17   lightgbm_RS_4   0.7181    0.6058  0.3824
5    ranger_RS_2     0.7114    0.6929  0.2712

Plots for all models

[Model comparison plot: accuracy, auc, and f1 bars for the top models xgboost_RS_5, ranger_RS_4, lightgbm_RS_5, xgboost_model, and lightgbm_RS_1.]

Plots for the best model - xgboost_RS_5

[ROC curve (AUC = 0.8088) and a confusion matrix for the best model.]

Feature Importance for the best model - xgboost_RS_5

[Permutation feature importance plot: root mean square error (RMSE) loss after permutation, created for the xgb.Booster model, ranking the features V4, V3, V2, V1.]

Details about data

——————– CHECK DATA REPORT ——————–

The dataset has 748 observations and 5 columns, whose names are: V1; V2; V3; V4; Class; with the target value described by the column: Class.

No static columns.
No duplicate columns.
No target values are missing.
No predictor values are missing.
No issues with dimensionality.

Strongly correlated, by Spearman rank, pairs of numerical values are: V2 - V3: 1;

These observations might be outliers due to their numerical column values: 1 10 116 342 496 497 498 499 5 500 501 503 504 505 506 518 529 747 748;

Dataset is unbalanced with: 3.202247 proportion, with 1 being the dominating class.

Column names suggest that none of them are IDs.
Column data suggest that none of them are IDs.

——————– CHECK DATA REPORT END ——————–

The best model details

------------ Xgboost model ------------

Parameters
niter: 20
evaluation_log: iter : train_auc (the entries for iterations 1–20 are empty in the extracted report)
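The accuracy and F1 values that appear in the leaderboard above and in the benchmark tables follow the usual confusion-matrix definitions. A minimal sketch in Python (generic metric code, not taken from the forester package; the forester reports F1 as NaN when the model never predicts the positive class, which this sketch maps to 0.0):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy and F1 from raw confusion-matrix counts (positive class = 1).

    F1 is the harmonic mean of precision and recall, which simplifies to
    2*tp / (2*tp + fp + fn); when tp == 0 the value is undefined (NaN in
    the forester leaderboard), mapped here to 0.0.
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return accuracy, f1

# A perfect model scores 1.0 on both metrics, matching e.g. the forester
# rows for banknote-authentication in Table 3.
print(classification_metrics(10, 0, 0, 10))  # -> (1.0, 1.0)
```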
Venue: automl.cc/AutoML/2023/ABCD_Track
Year: 2023
["machine learning", "automated machine learning", "tree-based models", "automated reporting"]
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling

Anna Kozak¹  Hubert Ruczyński¹
¹Warsaw University of Technology

Abstract
The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, there are limited R solutions available, and their high entry level means they are not accessible to everyone. To fill this gap, we present the forester package, which offers ease of use regardless of the user's proficiency in the area of machine learning.
The forester package is an open-source AutoML package implemented in R, designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the forester package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.

1 Introduction

Machine learning is being used more and more in the world around us. Every day, models are created to assist doctors (Shimizu and Nakayama, 2020), financiers (Jorge et al., 2022), or tourists (Fararni et al., 2021). With the increasing demand for model building, research is being conducted on automatically developing tools to build artificial-intelligence-based solutions.
Many types of models are used in machine learning, ranging from decision rules (e.g., scorecard models) to complex neural network structures modeling natural language (large language models, for example, ChatGPT (Bavarian et al., 2022)).
Viewing machine learning in terms of tabular data, we have a wide range of models available, from decision trees and linear or logistic regression to random forests, SVMs, or neural networks. However, tree-based models are the most widely used; the main reason behind this is their high predictive efficiency. A simple decision tree model gives relatively satisfactory results, but using multiple trees to create a random forest allows significantly higher predictive power (Caruana et al., 2008; Grinsztajn et al., 2022).
Automating the process of building machine learning models can include many different components. For example, the CRoss Industry Standard Process for Data Mining (CRISP-DM) (Wirth and Hipp, 2000) is the most common methodology for data mining, analytics, and data science projects. But the basic framework of an automatic machine learning system is the preparation of models based on data entered by the user. This process can be extended in various directions; for example, exploratory data analysis can screen the given data for potential errors or outlier observations. Another essential element may be the search space of the model's hyperparameters. Optimization of hyperparameters can be based on simple methods such as a predefined parameter grid or random search. Another way to select hyperparameters is to use Bayesian optimization (Snoek et al., 2012) or meta-learning (Vilalta et al., 2004; Vanschoren, 2019; Woźnica and Biecek, 2022). After tuning the models with hyperparameter optimization, the next step we can add is to analyze the results in the form of a leaderboard or visualization.

AutoML 2023 Workshop Track © 2023 the authors, released under CC BY 4.0
By extending the process with explanatory methods (Biecek and Burzykowski, 2021) and reporting, the entire machine learning pipeline can be finalized.
Automating the process of machine learning gives people who are just starting out in data analysis and modeling access to data science tools. At the same time, it improves and speeds up the work of experienced data scientists, who can build at least baseline models using a single line of code.
In this paper, we present an AutoML package written in R (R Core Team, 2022) that creates models for regression and binary classification tasks on tabular data. The main goals of the package are: making the package easy to use, fully automating all the necessary steps inside the ML pipeline, and providing results that are easy to create and understand and that allow diagnostics of the models. The availability of responsible machine learning methods in the solution allows the results of complex models to be interpreted. Changing the focus from obtaining the best possible outcomes to the interpretability of the results is a novelty among AutoML tools. The implementation of the forester package can be found in our GitHub repository1. The software is open source and contains comprehensive documentation with examples of use.

2 Related works

Packages for AutoML are prevalent in Python. The first AutoML solution, Auto-WEKA (Thornton et al., 2013), was followed by Auto-Sklearn (Feurer et al., 2015, 2022) and TPOT (Tree-Based Pipeline Optimization Tool) (Olson et al., 2016), which were among the very first AutoML methods and open-source software packages developed for the data science community in Python. In R, however, there are few approaches. One of them is the H2O package (LeDell et al., 2022). It is an open-source library providing an in-memory, distributed, fast, and scalable machine learning and predictive analytics platform that creates a ranked list of models easily exported for use in a production environment.
The authors have created an easy-to-use interface that automates the training of multiple candidate models. H2O's AutoML is also designed for more advanced users by providing a simple wrapper function that performs many modeling tasks. H2O's AutoML process automatically trains models and tunes them within a user-specified time budget. To assess the quality of models in H2O, we can rely on metrics such as R² and mean squared error (MSE). For comparison, in the forester package, we can compare models using the most commonly used metrics or even define a new custom metric. What particularly distinguishes the forester package from H2O is the preprocessing. In the latter's case, it only includes target encoding and is in the experimental stage. In the forester package, we have more accurate and extensive preprocessing. In addition, H2O always requires Java to work, so the user must also install it.
The second widely-used framework is the mlr3 package (Lang et al., 2019), which provides a framework for classification, regression, survival analysis, and other ML tasks such as cluster analysis. It provides the ability to perform hyperparameter tuning and feature selection. The package is well-documented, contains many functions and models, and provides many capabilities. However, it is different from a typical AutoML package, as creating models requires knowledge of how to do it and some time to assemble such a model. It also has its drawbacks, such as the need for additional preprocessing: for example, the XGBoost model accepts only numerical data without factors. There is also no built-in way to divide a dataset into training, testing, and validation subsets. The mlr3 package provides functionality that builds on the basic components of machine learning. It can be extended to include preprocessing, pipelining, visualization, additional learners, additional task types, and more. To obtain these properties, we need to install many other libraries.
In the forester package, we provide these components at once, and with a single function, we can perform preprocessing, prepare visualization of the results, and generate a report. A more detailed comparison of the forester package with H2O and mlr3 is presented in Appendix F.

1 https://github.com/ModelOriented/forester

[Figure 1 diagram: raw data flows through (1) data check (missing values, correlated features, irrelevant columns), (2) data preparation (data splitting, preprocessing, data imputation), (3) model training and tuning (default parameters, random search, Bayesian optimization), and (4) model evaluation (ranked list, customizable metrics), followed by decision making supported by forester features such as save(), report(), and explain().]

Figure 1: A diagram presenting the forester pipeline. The forester analyses poor-quality data with the in-built data check (1), which points to possible issues, and later data preparation (2) handles them during the preprocessing. In the next step, the models are trained with default and random-searched parameters and tuned with a Bayesian optimization algorithm (3). In the end, trained models are evaluated (4) and presented as a ranked list. In addition, the package offers the user additional features.

3 forester AutoML

The forester is an AutoML package automating the machine learning pipeline, starting from the data preparation, through model training, to the interpretability of the results. This way, we minimize the time the user spends on basic and often repetitive activities related to the machine-learning process. Despite the high automation of the pipeline shown in Figure 1, we expose multiple parameters which advanced data scientists can use to customize the model creation. The whole package relies on the four pillars described in this section.

1. Data check
The first one, called data check, concerns the data preparation phase. Data preparation is a crucial part of the modeling process (Rutkowski et al., 2010), so we cannot blindly assume a single way of transforming the data for all cases.
Appropriate data preprocessing is crucial to building a model with a small error rate. To face that issue, we introduce a data check report summarizing the dataset with some basic information and pointing out possible problems. Data problems can affect the following modeling stages and be relevant to any model. The data check report points out id-like, duplicated, static, or highly correlated columns. Moreover, it points out the outliers, missing values, and the imbalance of the target. This way we can propose some simple heuristic data preprocessing methods, yet more advanced users are able to fight the issues mentioned by studying the data check report on their own.

2. Data preparation
Preparing the data for modeling is another crucial aspect after checking the data. It can be done using a dedicated tool, but the forester package offers two general-purpose preprocessing methods, basic and advanced. The main purpose of this function is to remove the need to prepare data manually, in different ways for different types of models. The basic preparation consists of the actions that are necessary for the package to work, that is: the removal of static columns, binarization of the target variable, and imputation of the missing data using the MICE algorithm (Buuren and Groothuis-Oudshoorn, 2011). The advanced method additionally includes the removal of id-like columns (features suspected of being ids) and of highly correlated columns (Spearman's rank for the numerical features, and Cramér's V for categorical features), as well as feature selection with the BORUTA algorithm (Kursa and Rudnicki, 2010). Additionally, every model in the forester package requires a different data format, which is also prepared inside the main function.

3. Model training and tuning
The forester package's third and most important pillar is model training and tuning. Our solution focuses on the tree-based model family because of their high-quality performance on various tabular data tasks.
We've limited ourselves to 5 well-known engines with different strong and weak points, so they complement each other. We have included the basic decision tree from the partykit package (Hothorn and Zeileis, 2015) as an extremely light engine, but mostly we have focused on the ensemble models. The only bagging representative is the random forest from the ranger package (Wright and Ziegler, 2017), which is reluctant to overfit.
We have also considered three different boosting algorithms. The XGBoost model (Chen and Guestrin, 2016) is highly effective, but due to the need for one-hot encoding, it suffers from an abundance of categorical features. The LightGBM model (Ke et al., 2017), which works best for medium and large datasets, has problems with small ones. The last engine is CatBoost (Prokhorenkova et al., 2018), which can achieve superior performance but requires the Java environment installed, which is a minor inconvenience.
The models are trained with three approaches: using the default parameters, performing the random search algorithm within the predefined parameter space, and running an advanced Bayesian optimization algorithm for fine-grained tuning. The first method is the baseline for the other models. With the second one, we can cheaply create multiple models and explore various parameter combinations. The best and most time-consuming method is the Bayesian optimization from the ParBayesianOptimization package. However, it is extremely useful for complex tasks.

4. Model evaluation
The last pillar is the automatic evaluation of the trained models. The forester package assesses every trained model by various metrics, such as accuracy, area under the receiver operating characteristic curve (AUC), and F1 for the binary classification tasks, and Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), or R² for the regression tasks. The results are later presented as a ranked list sorted by the outcomes (for example, ascending order for RMSE, and descending for AUC).
Moreover, the user can define their own metrics and provide them for the evaluation phase.

4 forester features

One of the most important goals of the forester package is convenience of use, helping users focus more on analyzing the results than on writing the code. To obtain such a user-friendly environment, the forester offers plenty of additional features useful for data scientists.

4.1 Model explanations
In recent years, interpretable machine learning has become a significant trend in machine learning. The tools providing interpretability, such as DALEX (Biecek, 2018) or iml (Molnar et al., 2020), allow data scientists to explain how the models they create work, making it easier to detect their misbehavior. Models' explainability also enhances trust in such tools, even in demanding environments like medical research. To support using explainable methods with the models trained by the forester, we have created a wrapper for the DALEX explainer compatible with our package. This way, the user can easily create various explanations for the trained models.

4.2 Saving the outcomes
Another crucial feature is the save function, which lets the user save the training output. The returned forester object contains lots of information, such as the preprocessed dataset, the split datasets, split indexes, ranked lists for the training, testing, and validation datasets, the predictions of the models, and much more. The abundance of objects makes it incredibly important to save the outcomes after the time-consuming training process.

4.3 Automated report
Last but not least, our solution offers an automatically generated report that helps users quickly and easily analyze the training results. The main goal of this feature is to ensure that every user is able to easily assess the quality of the trained models. The report consists of basic information about the dataset, a data check report, a ranked list of the best ten models, and visualizations concerning model quality.
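The data check section of the report relies on simple heuristics like those described in Section 3. As an illustration only (a base-R sketch with hypothetical helper names, not the forester's internal implementation), two of these checks, static columns and strongly Spearman-correlated numeric pairs, could look like this:

```r
# Sketch of two data-check-style heuristics; helper names are hypothetical.

# A column is "static" when it holds a single unique value.
find_static_columns <- function(df) {
  names(df)[vapply(df, function(col) length(unique(col)) == 1, logical(1))]
}

# Numeric column pairs whose absolute Spearman correlation exceeds a threshold.
find_correlated_pairs <- function(df, threshold = 0.9) {
  num <- df[vapply(df, is.numeric, logical(1))]
  rho <- cor(num, method = "spearman")
  idx <- which(abs(rho) > threshold & upper.tri(rho), arr.ind = TRUE)
  data.frame(first  = rownames(rho)[idx[, 1]],
             second = colnames(rho)[idx[, 2]])
}
```

On the blood-transfusion-service-center data from the example report, the second helper would flag the V2-V3 pair, matching the "strongly correlated" entry in the check data report.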
An example report for the blood-transfusion-service-center dataset (from the OpenML-CC18 benchmark (Bischl et al., 2021)) is provided in Appendix G.
The plots are divided into two groups; the first one compares the outcomes of different models, which helps to decide which model is the best. For example, guided by the radar chart comparison plot, we can choose a model with slightly worse accuracy but better AUC and F1 values. The second type of plot concentrates on the model with the best performance, and its most prominent feature is a feature importance plot. This visualization lets us understand which variables are the most important for the model; thus, we can evaluate its correctness.
It is worth noticing that the reports, mostly the visualizations, differ between binary classification and regression tasks, as we measure their performance differently.

5 User interface

5.1 Training function
The forester's main train() function runs the entire AutoML pipeline, including the data preparation, model training, and evaluation. To keep the package as simple as possible, the function requires only the dataset and target column name (Listing 1); however, to keep the tool versatile, there are lots of custom parameters for more advanced users (Listing 2).
With the latter option, the user can specify the number of Bayesian optimization iterations, the number of random search evaluations, and the proportions of the train, test, and validation subsets, change the preprocessing methods, or even add their own evaluation metric.

train_output <- train(data = lisbon, y = 'Price')

Listing 1: Training models with the forester package and default parameters.

train_output <- train(
  data = lisbon,
  y = 'Price',
  verbose = TRUE,
  engine = c('ranger', 'xgboost', 'decision_tree', 'lightgbm', 'catboost'),
  train_test_split = c(0.6, 0.2, 0.2),
  bayes_iter = 10,
  random_evals = 3,
  advanced_preprocessing = FALSE,
  metrics = 'auto',
  sort_by = 'auto',
  metric_function = NULL,
  metric_function_name = NULL,
  metric_function_decreasing = TRUE,
  best_model_number = 5
)

Listing 2: Training models with the forester package and custom parameters.

5.2 Extensive features
Apart from the train() function, the user can utilize additional functions that are helpful during the modeling process. The check_data() function (Listing 3) enables printing a data check report outside of the train() function. The save() function (Listing 4) lets us save the outcome of the training process, whereas the report() function (Listing 5) creates a training report.
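Returning to the custom-metric parameters of Listing 2 (metric_function, metric_function_name, metric_function_decreasing), a user-supplied metric can be passed to train(). The sketch below is illustrative only: the (observed, predicted) signature and the meaning of the decreasing flag are assumptions not confirmed by the paper.

```r
# Hypothetical custom metric: balanced accuracy for a 0/1 binary target.
# The (observed, predicted) signature is an assumption for illustration.
balanced_accuracy <- function(observed, predicted) {
  sens <- sum(predicted == 1 & observed == 1) / sum(observed == 1)
  spec <- sum(predicted == 0 & observed == 0) / sum(observed == 0)
  (sens + spec) / 2
}

train_output <- train(
  data = `blood-transfusion-service-center`,
  y = 'Class',
  metric_function = balanced_accuracy,
  metric_function_name = 'balanced_accuracy',
  metric_function_decreasing = TRUE  # assumed: higher values rank first
)
```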
The last extension is the explain() function (Listing 6), which creates a DALEX explainer that can be used to generate multiple visualizations concerning model interpretability with the DALEX package.

check_data(data = `blood-transfusion-service-center`, y = 'Class')

Listing 3: Generating a data check report.

save(train_output, name = 'train_output.RData')

Listing 4: Saving the train output.

report(train_output, 'report.pdf')

Listing 5: Generating a report from the train output.

exp <- explain(
  models = train_output$best_models[[1]],
  test_data = train_output$data,
  y = train_output$y,
  verbose = FALSE
)

Listing 6: Creating a model explainer that lets us use functions from the DALEX package.

6 Performance

To evaluate the performance of the package, we've decided to compare it to the H2O framework on the binary classification tasks from the OpenML-CC18 benchmark (Bischl et al., 2021) and regression tasks from OpenML (Vanschoren et al., 2013). Due to the limited computational resources, we have chosen a subset of 8 datasets for classification and 7 for regression, described in Table 1 and Table 2, respectively. The binary classification datasets consisted mainly of categorical variables and contained many missing values, a significant obstacle for both solutions, whereas the regression tasks had no missing values and mostly numeric or binary values.
During the experiment, we trained the forester package three times for each dataset, with random seeds provided to the data splitting function inside the forester. The same splits were later used for the H2O framework. A singular training iteration was executed for the decision tree, random forest, LightGBM, and CatBoost engines, with ten iterations of the Bayesian optimization and ten random search evaluations.
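The training protocol just described can be sketched with the forester API as follows. This is a schematic, not the authors' benchmark script: `dataset` and `target` are placeholders, and the seeds are those reported in the submission checklist.

```r
# Schematic of one benchmark run per seed (placeholders: dataset, target).
for (seed in c(123, 2137, 21)) {
  set.seed(seed)  # controls the train/test/validation split
  output <- train(
    data = dataset,
    y = target,
    engine = c('decision_tree', 'ranger', 'lightgbm', 'catboost'),
    bayes_iter = 10,     # ten Bayesian optimization iterations
    random_evals = 10    # ten random search evaluations
  )
  save(output, name = paste0('forester_run_', seed, '.RData'))
}
```

The saved split indexes from each run can then be reused to give H2O identical train, test, and validation subsets.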
For the regression task, we additionally added an XGBoost engine. To ensure that both frameworks had the same amount of time, we measured it for every forester training iteration and provided it to the respective H2O AutoML runs. This H2O functionality did not work as intended, and in the end this framework had, on average, twice as long a training time. This factor definitely improved H2O's results, and we have to bear that in mind when comparing the outcomes. For further details, see Appendix E. Additionally, to ensure the same data split, we used the indexes saved during the forester training. The source codes are included in Appendix A.
The comparison of performance for both frameworks is presented in Figure 2 and Figure 3. For the raw results, as well as aggregated tabular ones, see Appendix C. As one can see, for the binary classification task, the forester outperformed the H2O framework on five datasets: banknote-authentication, blood-transfusion-service-center, credit-approval, credit-g, and diabetes. The outcomes for the very simple datasets kr-vs-kp and breast-w were similar, and H2O obtained better performance on the phoneme data. For the regression tasks, the results were comparable to H2O's for most tasks or slightly worse, as for the pol dataset. The results show that the forester creates high-quality models that are competitive with the existing solutions.
However, our conclusions cannot be too far-fetched, since we tested the package on only a few datasets for binary classification and regression tasks.
We cannot say that the forester package's predictive power is better than H2O's, but they are clearly competitive.

Table 1: A subset of OpenML-CC18 benchmark datasets used during the evaluation process of the forester package; these are tabular datasets presenting binary classification tasks. The features are mostly categorical, and they contain lots of missing values.

Name                                  Number of columns   Number of rows
kr-vs-kp                              37                  3196
breast-w                              10                  699
credit-approval                       16                  690
credit-g                              21                  1000
diabetes                              9                   768
phoneme                               6                   5404
banknote-authentication               5                   1372
blood-transfusion-service-center      5                   748

Table 2: A subset of OpenML datasets used during the evaluation process of the forester package; these are tabular datasets presenting regression tasks. In this case, there were no missing values, and the features were mostly numerical or binary.

Name                                  Number of columns   Number of rows
bank32nh                              33                  8192
wine_quality                          12                  6497
Mercedes_Benz_Greener_Manufacturing   378                 4209
kin8nm                                9                   8192
pol                                   49                  15000
2dplanes                              11                  40768
elevators                             19                  16599

[Figure 2 plot: accuracy of forester and H2O on the train, valid, and test subsets for each binary classification dataset.]

Figure 2: Performance comparison for the forester and H2O frameworks for the datasets described in Table 1. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap.
This plot clearly shows us that the forester performs better than the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

[Figure 3 plot: RMSE of forester and H2O on the train, valid, and test subsets for each regression dataset.]

Figure 3: Performance comparison for the forester and H2O frameworks for the datasets described in Table 2. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap. This plot shows us that the forester performs comparably to the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

7 Limitations and Broader Impact Statement

The forester package has limitations in the availability of models. The library contains only tree-based models, but this family proves to be extremely versatile. Only binary classification and regression are available in the current version of the package. Preparing models for multi-criteria classification, cluster analysis, or survival analysis is currently impossible. However, these features can be easily implemented in the future. The package currently performs better with smaller datasets; a large allocation of memory and time is needed for large and complex data.
One of the strongest points of the forester package is that it is incredibly easy to use, even without broad machine learning expertise. This approach, however, raises the risk that the models trained with the package will be of poor quality, for example, due to training on a low-quality dataset, or that the outcomes will be misunderstood or incorrectly interpreted by an inexperienced user.
The reporting module addresses all of these responsible machine learning concerns: it informs about possible issues with the data, measures the quality of the models, and provides their explanations.

8 Conclusions

This paper presents an R package for AutoML that creates models for regression and binary classification tasks conducted on tabular data. Our solution addresses the needs we have observed in AutoML tools across various programming languages. The main goals of the package are to keep it stable and easy to use, to automate all the necessary steps inside the ML pipeline, and to provide results that are easy to create and understand and that allow for diagnostics of the models. To achieve these results, we have focused only on the best representatives of the family of tree-based models, which show superiority over other methods on tabular data. Furthermore, we provide additional functions that allow the user to save the models, create explanations, and create a report describing the learning process and explaining the developed models. Experiments carried out tentatively indicate that our solution obtains more predictive power than currently existing solutions in R.

9 Submission Checklist

1. For all authors. . .
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We introduced the forester package and described its potential. Section 3 and Section 4 describe the various features.
(b) Did you describe the limitations of your work? [Yes] See Section 7.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 7.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] We believe that our paper conforms to the guidelines.
2. If you are including theoretical results. . .
(a) Did you state the full set of assumptions of all theoretical results?
[N/A] We have no theoretical results.
(b) Did you include complete proofs of all theoretical results? [N/A] We have no theoretical results.
3. If you ran experiments. . .
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)? [Yes] See Appendix A.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] The most important results analyzed in this paper are presented or mentioned (via a link) in Appendix C.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] The training details are mentioned in Section 6 of the main paper, as well as in the source code described in Appendix A.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training, and hyperparameters for that code?
[Yes] The methods were compared on the same train, test, and validation subsets, and the hyperparameter search space was the default one for each AutoML framework.
(g) Did you run ablation studies to assess the impact of different components of your approach? [No] The package at this point is pretty straightforward and doesn't contain many components that could alter the outcomes. A possible ablation study could be applied to the advanced preprocessing method; however, we did not have enough computational power to run the benchmark again.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] The models were compared by the same metrics: accuracy, AUC, and F1 for classification, and RMSE, MSE, R², and MAE for regression.
(i) Did you compare performance over time? [No] We did not have enough resources for multiple experiment executions.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] As described in Section 6, we performed three runs of the forester and H2O training, with the random seeds for the train, test, and validation splits set to the values 123, 2137, and 21.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not have error bars on the visualizations, but we provide exact values without any statistical aggregations.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We used a tabular benchmark consisting of 8 datasets describing the binary classification tasks from the OpenML-CC18 benchmark, as described in Section 6.
(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix B.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also hyperparameters of your own method)?
[N/A] During the experiments, all computations were conducted by the AutoML frameworks, and no additional tuning was included.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .
(a) If your work uses existing assets, did you cite the creators? [Yes] A full list of the cited papers/tools is given in the references.
(b) Did you mention the license of the assets? [Yes] Used assets, mostly R packages, are described in Appendix D.
(c) Did you include any new assets either in the supplemental material or as a URL? [Yes] The forester package is a new asset: https://github.com/ModelOriented/forester.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] See Section 6; we are using OpenML-CC18 and its data. We cited all data sources according to the guidelines for datasets on OpenML (and in OpenML-CC18).
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] Our data does not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects. . .
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not do research with human subjects.
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not do research with human subjects.
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not do research with human subjects.

Acknowledgements. We would like to thank Adrianna Grudzień and Patryk Słowakiewicz for their development work on the forester package.
We also thank Katarzyna Woźnica, Hubert Baniecki, Mikołaj Spytek, and Mateusz Krzyziński for their valuable comments about the study.

References

Bavarian, M., Jun, H., Tezak, N., Schulman, J., McLeavey, C., Tworek, J., and Chen, M. (2022). Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255.
Biecek, P. (2018). DALEX: Explainers for Complex Predictive Models in R. Journal of Machine Learning Research, 19(84):1–5.
Biecek, P. and Burzykowski, T. (2021). Explanatory Model Analysis. Chapman and Hall/CRC, New York.
Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., and Vanschoren, J. (2021). OpenML benchmarking suites. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Buuren, S. and Groothuis-Oudshoorn, C. (2011). MICE: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software, 45.
Caruana, R., Karampatziakis, N., and Yessenalina, A. (2008). An empirical evaluation of supervised learning in high dimensions. Proceedings of the 25th International Conference on Machine Learning, pages 96–103.
Chen, T. and Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785–794.
Fararni, K. A., Nafis, F., Aghoutane, B., Yahyaouy, A., Riffi, J., and Sabri, A. (2021). Hybrid recommender system for tourism based on big data and AI: A conceptual framework. Big Data Mining and Analytics, 4(1):47–55.
Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2022). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. Journal of Machine Learning Research, 23(261):1–61.
Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems, volume 28.
Grinsztajn, L., Oyallon, E., and Varoquaux, G. (2022). Why do tree-based models still outperform deep learning on typical tabular data? In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Hothorn, T. and Zeileis, A. (2015). partykit: A Modular Toolkit for Recursive Partytioning in R. Journal of Machine Learning Research, 16(118):3905–3909.
Jorge, C. C., Antonio, O. A. J., Hugo, G. M. V., and Hugo, O. P. D. (2022). Machine Learning for Personal Credit Evaluation: A Systematic Review. WSEAS Transactions on Computer Research, 10:62–73.
Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems, volume 30.
Kursa, M. B. and Rudnicki, W. R. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software, 36(11):1–13.
Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., and Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44):1903.
LeDell, E., Gill, N., Aiello, S., Fu, A., Candel, A., Click, C., Kraljevic, T., Nykodym, T., Aboyoun, P., Kurka, M., and Malohlava, M. (2022). h2o: R Interface for the 'H2O' Scalable Machine Learning Platform. R package version 3.38.0.1.
Molnar, C., Casalicchio, G., and Bischl, B. (2020). Interpretable machine learning – a brief history, state-of-the-art and challenges. In ECML PKDD 2020 Workshops, pages 417–431.
Olson, R. S., Bartley, N., Urbanowicz, R. J., and Moore, J. H. (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO '16, pages 485–492.
Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. (2018). CatBoost: unbiased boosting with categorical features. In Advances in Neural Information Processing Systems, volume 31.
R Core Team (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L., and Zurada, J. (2010). Artificial Intelligence and Soft Computing, Part II: 10th International Conference, ICAISC 2010.
Shimizu, H. and Nakayama, K. I. (2020). Artificial intelligence in oncology. Cancer Science, 111(5):1452–1460.
Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, volume 25.
Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 847–855.
Vanschoren, J. (2019). Meta-Learning, pages 35–61. Springer International Publishing, Cham.
Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. (2013). OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49–60.
Vilalta, R., Giraud-Carrier, C., Brazdil, P., and Soares, C. (2004). Using meta-learning to support data mining. International Journal of Computer Science Applications, 1.
Wirth, R. and Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining.
Woźnica, K. and Biecek, P. (2022). Towards explainable meta-learning. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part I, pages 505–520.
Wright, M. N. and Ziegler, A. (2017). ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R. Journal of Statistical Software, 77(1):1–17.

A Source Code

The source code of the experiments, prepared visualizations, and tables from Appendix C is available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments as the forester_benchmark.Rmd file. The markdown notebook file describes the installation process, and it can be safely executed with the guidance of our remarks between the code chunks.

B Resources

As mentioned in Section 6, our team was limited in computational power. The experiment was conducted on our private PC with 32 GB of RAM, CPU: 11th Gen Intel(R) Core(TM) i7-11700KF @ 3.60GHz (16 cores), and GPU: NVIDIA GeForce RTX 3070 Ti; however, as the forester is not yet implemented to work on the GPU, only the CPU was used.

C Raw results

In this section we provide the raw results mentioned in Section 6, which were used in Figure 2. Raw results for the train, test, and validation datasets are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/raw_training_results. In this section we offer the results aggregated as the mean values of the metrics, which are presented in Table 3, Table 4, and Table 5 for the binary classification tasks. These tables also broaden our perspective by providing AUC and F1 values. The results for the regression tasks are presented in Table 6, Table 7, and Table 8.
These tables also broaden our perspective by providing MSE, R2, and MAE values.

Table 3: Mean accuracy, AUC, and F1 values for the forester and H2O frameworks on all binary classification training datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 1 1 1
banknote-authentication H2O 0.929 0.923 0.905
blood-transfusion-service-center forester 0.77 0.752 1
blood-transfusion-service-center H2O 0.7 0.682 0.519
breast-w forester 1 1 1
breast-w H2O 0.998 0.998 0.997
credit-approval forester 0.999 1 1
credit-approval H2O 0.961 0.959 0.955
credit-g forester 0.967 0.998 1
credit-g H2O 0.906 0.855 0.938
diabetes forester 0.991 0.999 1
diabetes H2O 0.874 0.871 0.826
kr-vs-kp forester 1 1 1
kr-vs-kp H2O 0.999 0.999 0.965
phoneme forester 1 1 1
phoneme H2O 1 1 1

Table 4: Mean accuracy, AUC, and F1 values for the forester and H2O frameworks on all binary classification testing datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 0.995 0.995 1
banknote-authentication H2O 0.933 0.927 0.915
blood-transfusion-service-center forester 0.796 0.772 0.976
blood-transfusion-service-center H2O 0.713 0.707 0.54
breast-w forester 0.976 0.984 0.986
breast-w H2O 0.971 0.97 0.959
credit-approval forester 0.885 0.931 0.942
credit-approval H2O 0.882 0.882 0.87
credit-g forester 0.733 0.79 0.865
credit-g H2O 0.743 0.64 0.829
diabetes forester 0.768 0.823 0.799
diabetes H2O 0.753 0.727 0.643
kr-vs-kp forester 0.994 0.999 0.991
kr-vs-kp H2O 0.991 0.991 0.991
phoneme forester 0.909 0.96 0.867
phoneme H2O 0.904 0.895 0.842

Table 5: Mean accuracy, AUC, and F1 values for the forester and H2O frameworks on all binary classification validation datasets used in the benchmark.

task_name framework accuracy auc f1
banknote-authentication forester 1 1 1
banknote-authentication H2O 0.916 0.908 0.887
blood-transfusion-service-center forester 0.775 0.773 0.833
blood-transfusion-service-center H2O 0.675 0.68 0.509
breast-w forester 0.938 0.968 0.956
breast-w H2O 0.967 0.97 0.953
credit-approval forester 0.855 0.908 0.939
credit-approval H2O 0.867 0.862 0.842
credit-g forester 0.705 0.788 1
credit-g H2O 0.758 0.635 0.846
diabetes forester 0.747 0.803 0.866
diabetes H2O 0.755 0.735 0.656
kr-vs-kp forester 0.99 0.999 0.99
kr-vs-kp H2O 0.99 0.99 0.99
phoneme forester 0.901 0.954 0.851
phoneme H2O 0.9 0.896 0.839

Table 6: Mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks on all regression training datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 0.697 0.5 0.974 0.423
2dplanes H2O 0.984 0.969 0.95 0.785
bank32nh forester 0.001 0 1 0.001
bank32nh H2O 0.054 0.003 0.806 0.037
elevators forester 0.001 0 0.978 0.001
elevators H2O 0.002 0 0.942 0.001
kin8nm forester 0.012 0 0.997 0.009
kin8nm H2O 0.066 0.004 0.937 0.051
Mercedes_Benz_Greener_Manufacturing forester 2.456 6.13 0.963 0.775
Mercedes_Benz_Greener_Manufacturing H2O 7.806 61.115 0.625 4.935
pol forester 1.139 1.483 0.999 0.699
pol H2O 1.803 3.251 0.998 0.829
wine_quality forester 0.071 0.005 0.993 0.031
wine_quality H2O 0.161 0.027 0.965 0.124

Table 7: Mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks on all regression testing datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 1.003 1.007 0.948 0.802
2dplanes H2O 1.004 1.008 0.948 0.802
bank32nh forester 0.08 0.006 0.548 0.053
bank32nh H2O 0.076 0.006 0.599 0.05
elevators forester 0.002 0 0.884 0.002
elevators H2O 0.002 0 0.911 0.001
kin8nm forester 0.113 0.013 0.816 0.087
kin8nm H2O 0.084 0.007 0.899 0.065
Mercedes_Benz_Greener_Manufacturing forester 7.554 57.195 0.626 5.039
Mercedes_Benz_Greener_Manufacturing H2O 7.583 57.598 0.623 5.222
pol forester 4.739 22.508 0.987 2.242
pol H2O 3.198 10.278 0.994 1.3
wine_quality forester 0.614 0.377 0.505 0.451
wine_quality H2O 0.604 0.365 0.521 0.43

Table 8: Mean RMSE, MSE, R2, and MAE values for the forester and H2O frameworks on all regression validation datasets used in the benchmark.

task_name framework rmse mse r2 mae
2dplanes forester 0.999 0.997 0.948 0.799
2dplanes H2O 1 0.999 0.948 0.8
bank32nh forester 0.082 0.007 0.544 0.053
bank32nh H2O 0.078 0.006 0.591 0.052
elevators forester 0.002 0 0.875 0.002
elevators H2O 0.002 0 0.907 0.001
kin8nm forester 0.111 0.012 0.822 0.085
kin8nm H2O 0.083 0.007 0.899 0.065
Mercedes_Benz_Greener_Manufacturing forester 8.464 73.039 0.559 5.261
Mercedes_Benz_Greener_Manufacturing H2O 8.458 72.911 0.56 5.373
pol forester 4.379 19.256 0.989 1.885
pol H2O 3.01 9.087 0.995 1.213
wine_quality forester 0.632 0.399 0.478 0.466
wine_quality H2O 0.624 0.389 0.492 0.447

D Used assets

In this section we describe the packages used for both the forester and the experiments. The packages outside of the forester required for the experiments are listed in Table 9. An additional requirement for the catboost and H2O packages is an installed Java runtime. The packages required by the forester, as well as their versions used during the experiment, are presented in Table 10.

Table 9: The packages and their versions under which the experiments were executed and the supplemental materials were created.

package version license
xlsx 0.6.5 GPL-3
stringr 1.5.0 MIT
ggbeeswarm 0.6.0 GPL (>= 2)
dplyr 1.0.10 MIT
ggplot2 3.4.0 MIT
tictoc 1.1 Apache License (== 2.0)
H2O 3.38.0.1 Apache License (== 2.0)
forester 1.2.1 GPL-3
OpenML 1.12 BSD_3_clause

Table 10: The forester package's dependencies and their versions used during the experiments.

package version license
Boruta 7.0.0 GPL (>= 2)
catboost 1.1.1 Apache License (== 2.0)
crayon 1.5.2 MIT
DALEX 2.4.2 GPL
data.table 1.14.2 MPL-2.0
ggplot2 3.4.0 MIT
ggradar 0.2 GPL
ggrepel 0.9.3 GPL-3
knitr 1.40 GPL
lightgbm 3.3.2 MIT
mice 3.14.0 GPL-2 | GPL-3
mltools 0.3.5 MIT
ParBayesianOptimization 1.2.4 GPL-2
partykit 1.2-16 GPL-2 | GPL-3
pROC 1.18.0 GPL (>= 3)
ranger 0.14.1 GPL-3
rcompanion 2.4.18 GPL-3
rmarkdown 2.16 GPL-3
splitTools 0.3.2 GPL (>= 2)
testthat 3.1.6 MIT
tibble 3.1.8 MIT
tinytex 0.43 MIT
varhandle 2.0.5 GPL (>= 2)
xgboost 1.6.0.1 Apache License (== 2.0)
stats 4.1.2 Part of R 4.1.2

E Execution times comparison

In this section we briefly explore the times needed for every experiment execution for both frameworks. The results presented in Table 11 and Table 12 show that the final execution times differ, despite setting exactly the same time budget for the H2O experiment as the forester had. Our empirical results show that the H2O runs lasted two times longer on average than the forester, which casts a different light on the comparison of the frameworks' performance. The raw results needed for these tables are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/execution_times.

Table 11: The comparison of mean execution times in seconds for the forester and H2O for binary classification experiments.

task_name forester H2O difference relative_difference
banknote-authentication 818.33 2521.33 -1703 0.28
blood-transfusion-service-center 155.67 555.67 -400 0.26
breast-w 451.33 797.33 -346 0.57
credit-approval 805 1513 -708 0.53
credit-g 2453 4234 -1781 0.58
diabetes 1645.67 2643.67 -998 0.62
kr-vs-kp 451.33 806.67 -355.33 0.57
phoneme 2748.33 3695.33 -947 0.67

Table 12: The comparison of mean execution times in seconds for the forester and H2O for regression experiments.

task_name forester H2O difference relative_difference
2dplanes 401 1050.67 -649.67 0.38
bank32nh 708.67 1214.67 -506 0.58
elevators 720.33 1435.33 -715 0.5
kin8nm 544.67 1564 -1019.33 0.35
Mercedes_Benz_Greener_Manufacturing 848 1371.67 -523.67 0.61
pol 756 1548.33 -792.33 0.49
wine_quality 1317.33 2130 -812.67 0.63

F Package comparison

We have prepared a notebook showing the differences between the packages described in the related work section. The document includes a comparison of package installation, a description of available preprocessing, variable selection options, and model tuning.
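The "difference" column in Tables 11 and 12 is simply the forester mean time minus the H2O mean time, which can be recomputed directly. A short illustrative Python sketch using values copied from Table 11 (note: the paper's "relative difference" column does not exactly equal the ratio of the two reported means, so it is presumably aggregated over individual runs; the ratio below is a separate, direct quantity):

```python
# Mean execution times in seconds (forester, H2O), copied from Table 11.
times = {
    "banknote-authentication": (818.33, 2521.33),
    "blood-transfusion-service-center": (155.67, 555.67),
    "phoneme": (2748.33, 3695.33),
}

def time_comparison(forester_s, h2o_s):
    """Difference of means (as in the table) and a direct ratio of means."""
    return {
        "difference": round(forester_s - h2o_s, 2),
        "ratio": round(forester_s / h2o_s, 2),  # direct ratio, not the table's column
    }

for task, (f_time, h_time) in times.items():
    print(task, time_comparison(f_time, h_time))
```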
In addition, visualizations, methods of explainable machine learning, report preparation, and references to the available package documentation are described. We do not give a final assessment of the best package, because it could be subjective, but we leave the critical assessment to the reader. The notebook is available in the GitHub repository https://github.com/ModelOriented/forester/blob/main/misc/experiments/framework_comparison.Rmd.

G Report example

Forester report
version 1.2.1
2023-05-20 01:36:36

This report contains details about the best trained model, a table with metrics for every trained model, a scatter plot for the chosen metric, and info about the used data.

The best models

This is the binary_clf task. The best model is: xgboost_RS_5.
The names of the models were created by the pattern Engine_TuningMethod_Id, where:
• Engine describes the engine used for the training (random_forest, xgboost, decision_tree, lightgbm, catboost),
• TuningMethod describes how the model was tuned (basic for basic parameters, RS for random search, bayes for Bayesian optimization),
• Id for separating the random search parameter sets.
More details about the best model are present at the end of the report.

no. name accuracy auc f1
13 xgboost_RS_5 0.7919 0.8088 0.2791
7 ranger_RS_4 0.7785 0.6965 0.1538
18 lightgbm_RS_5 0.7785 0.7361 0.4211
2 xgboost_model 0.7718 0.7090 0.4138
14 lightgbm_RS_1 0.7718 0.7578 0.3704
4 ranger_RS_1 0.7651 0.7930 NaN
6 ranger_RS_3 0.7651 0.7228 NaN
10 xgboost_RS_2 0.7651 0.7801 NaN
11 xgboost_RS_3 0.7651 0.7367 NaN
16 lightgbm_RS_3 0.7651 0.7690 NaN
21 lightgbm_bayes 0.7651 0.7340 0.3636
8 ranger_RS_5 0.7584 0.7579 0.0526
12 xgboost_RS_4 0.7517 0.6609 0.3729
19 ranger_bayes 0.7517 0.7333 0.2449
20 xgboost_bayes 0.7517 0.7409 0.2449
1 ranger_model 0.7450 0.7063 0.3214
3 lightgbm_model 0.7450 0.6842 0.3871
9 xgboost_RS_1 0.7450 0.6619 0.3667
15 lightgbm_RS_2 0.7181 0.6058 0.3824
17 lightgbm_RS_4 0.7181 0.6058 0.3824
5 ranger_RS_2 0.7114 0.6929 0.2712

Plots for all models
[Figure: "Model comparison" bar plot comparing accuracy, AUC, and F1 across all trained models.]

Plots for the best model - xgboost_RS_5
[Figure: ROC curve (AUC = 0.8088) and confusion matrix for xgboost_RS_5.]

Feature importance for the best model - xgboost_RS_5
[Figure: permutation-based feature importance, i.e., root mean square error (RMSE) loss after permutations, created for the xgb.Booster model over the features V4, V3, V2, and V1.]

Details about data

-------------------- CHECK DATA REPORT --------------------
The dataset has 748 observations and 5 columns, whose names are: V1; V2; V3; V4; Class; with the target value described by the column: Class.
No static columns.
No duplicate columns.
No target values are missing.
No predictor values are missing.
No issues with dimensionality.
Strongly correlated, by Spearman rank, pairs of numerical values are: V2 - V3: 1;
These observations might be outliers due to their numerical column values: 1 10 116 342 496 497 498 499 5 500 501 503 504 505 506 518 529 747 748;
The dataset is unbalanced, with a 3.202247 proportion, with 1 being the dominating class.
Column names suggest that none of them are IDs.
Column data suggest that none of them are IDs.
-------------------- CHECK DATA REPORT END --------------------

The best model details
------------ Xgboost model ------------
Parameters: niter: 20
evaluation_log: iter/train_auc entries for iterations 1-20 (the train_auc values are blank in the generated report).
ids: azRPrMrnKH, Q3DWpGoX7PD
venue: automl.cc/AutoML/2023/ABCD_Track
year: 2023
title: forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling
authors: Anna Kozak, Hubert Ruczyński
keywords: machine learning, automated machine learning, tree-based models, automated reporting
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling

Anna Kozak (Warsaw University of Technology), Hubert Ruczyński (Warsaw University of Technology)

Abstract: The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists are associated with the R language. Unfortunately, there are limited R solutions available with a high entry level, which means they are not accessible to everyone. To fill this gap, we present the forester package, which offers ease of use regardless of the user's proficiency in the area of machine learning. The forester package is an open-source AutoML package implemented in R, designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the forester package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.

1 Introduction

Machine learning is being used more and more in the world around us. Every day, models are created to assist doctors (Shimizu and Nakayama, 2020), financiers (Jorge et al., 2022), or tourists (Fararni et al., 2021). With the increasing demand for model building, research is being conducted on automatically developing tools to build artificial intelligence based solutions.

Many types of models are used in machine learning, ranging from decision rules (e.g., a scoring card model) to complex neural network structures modeling natural language (large language models, for example, ChatGPT (Bavarian et al., 2022)).
Viewing machine learning in terms of tabular data, we have a wide range of models available, from decision trees and linear or logistic regression to random forests, SVMs, or neural networks. However, tree-based models are the most widely used; the main reason behind this is their high predictive efficiency. A simple decision tree model gives relatively satisfactory results, but using multiple trees to create a random forest allows significantly higher predictive power (Caruana et al., 2008; Grinsztajn et al., 2022).

Automating the process of building machine learning models can include many different components. For example, the CRoss Industry Standard Process for Data Mining (CRISP-DM) (Wirth and Hipp, 2000) is the most common methodology for data mining, analytics, and data science projects. But the basic framework of an automatic machine learning system is the preparation of models based on data entered by the user. This process can be extended in various directions; for example, a preliminary analysis of the given data can look for potential data errors or outlier observations, i.e., exploratory data analysis. Another essential element may be the search space of the model's hyperparameters. Optimization of hyperparameters can be based on simple methods such as a predefined parameter grid or random search. Another way to select hyperparameters is to use Bayesian optimization (Snoek et al., 2012) or meta-learning (Vilalta et al., 2004; Vanschoren, 2019; Woźnica and Biecek, 2022). After tuning the models with hyperparameter optimization, the next step we can add is to analyze the results in the form of a leaderboard or visualization. By extending with explanatory methods (Biecek and Burzykowski, 2021) and reporting, the entire machine learning process can be finalized.

Automating the process of machine learning gives access to data science tools to people who are starting out in data analysis and modeling. At the same time, it improves and speeds up the work of experienced data scientists, who can produce at least baseline models using a single line of code.

In this paper, we present an AutoML package written for R (R Core Team, 2022) to create models for regression and binary classification tasks on tabular data. The main goals of the package are: making the package easy to use, fully automating all the necessary steps inside the ML pipeline, and providing results that are easy to create and understand and that allow diagnostics of the models. The availability of responsible machine learning methods in the solution allows the results of complex models to be interpreted. Changing the focus from obtaining the best possible outcomes to the interpretability of the results is a novelty for AutoML tools. The implementation of the forester package can be found in our GitHub repository (https://github.com/ModelOriented/forester). The software is open source and contains comprehensive documentation with examples of use.

2 Related works

Packages for AutoML are prevalent in Python. The first AutoML solution, Auto-WEKA (Thornton et al., 2013), was followed by Auto-Sklearn (Feurer et al., 2015, 2022) and TPOT (Tree-Based Pipeline Optimization Tool) (Olson et al., 2016), one of the very first AutoML methods and open-source software packages developed for the data science community in Python. In R, however, there are few approaches. One of them is the H2O package (LeDell et al., 2022). It is an open-source library that is an in-memory, distributed, fast, and scalable machine learning and predictive analytics platform that creates a ranked list of models easily exported for use in a production environment.
The authors have created an easy-to-use interface that automates the training of multiple candidate models. H2O's AutoML is also designed for more advanced users by providing a simple wrapper function that performs many modeling tasks. H2O's AutoML process automatically trains models and tunes them within a user-specified time budget. To better understand the quality of models in H2O, we can rely on metrics such as R2 and mean square error (MSE). For comparison, in the forester package we can compare models using the most commonly used metrics or even define a new custom metric. What particularly distinguishes the forester package from H2O is the preprocessing: in the latter's case, it only includes target encoding and is in the experimental stage, while the forester provides more accurate and extensive preprocessing. In addition, H2O always requires Java to work, so the user must also install it.

The second widely-used framework is the mlr3 package (Lang et al., 2019), which provides a framework for classification, regression, survival analysis, and other ML tasks such as cluster analysis. It provides the ability to perform hyperparameter tuning and feature selection. The package is well-documented, contains many functions and models, and provides many capabilities. However, it differs from a typical AutoML package, as creating models requires knowledge of how to do it and some time to assemble such a pipeline. It also has its drawbacks, such as the need for additional preprocessing; for example, the XGBoost model accepts only numerical data without factors. There is also no built-in way to divide the data into training, testing, and validation subsets. The mlr3 package provides functionality that builds on the basic components of machine learning. It can be extended to include preprocessing, pipelining, visualization, additional learners, additional task types, and more. To obtain these properties, we need to install many other libraries.
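A reproducible three-way split like the one used in the benchmark (random seeds 123, 2137, and 21 for the train/test/validation splits) is straightforward to sketch. The snippet below is illustrative stdlib Python, not the R implementation (the forester lists the splitTools package among its dependencies); the function name and the default ratios are assumptions:

```python
import random

def train_test_valid_split(n_rows, ratios=(0.6, 0.2, 0.2), seed=123):
    """Return index lists for a reproducible train/test/validation split."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    idx = list(range(n_rows))
    random.Random(seed).shuffle(idx)   # the seed fixes the permutation
    n_train = int(ratios[0] * n_rows)
    n_test = int(ratios[1] * n_rows)
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])

# 748 rows, as in the blood-transfusion-service-center dataset
train, test, valid = train_test_valid_split(748, seed=123)
```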
In the forester package, we provide these components at once, and with a single function we can perform preprocessing, prepare visualizations of the results, and generate a report. A more detailed comparison of the forester package with H2O and mlr3 is presented in Appendix F.

[Figure 1: A diagram presenting the forester pipeline. The forester analyses poor-quality data with the in-built data check (1), which points to possible issues (missing values, correlated features, irrelevant columns), and later data preparation (2) handles them during the preprocessing (data splitting, preprocessing, data imputation). In the next step, the models are trained with default and random-searched parameters and tuned with a Bayesian optimization algorithm (3). In the end, trained models are evaluated (4) and presented as a customizable ranked list. In addition, the package offers the user additional features: save(), report(), and explain().]

3 forester AutoML

The forester is an AutoML package automating the machine learning pipeline, starting from the data preparation, through model training, to the interpretability of the results. This way, we minimize the time the user spends performing basic and often repetitive activities related to the machine-learning process. Despite the high automation of the pipeline shown in Figure 1, we expose multiple parameters which advanced data scientists can use to customize the model creation. The whole package relies on the four pillars described in this section.

1. Data check
The first pillar, called data check, concerns the data preparation phase. Data preparation is a crucial part of the modeling process (Rutkowski et al., 2010), so we cannot blindly assume a single way of transforming the data for all cases.
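A data check of this kind boils down to simple per-column predicates. The sketch below is illustrative stdlib Python flagging three common issues (static columns, id-like columns, and missing values); the function name and heuristics are assumptions made for illustration, not the forester's actual rules, which also cover duplicates, correlation, outliers, and target imbalance:

```python
def data_check(columns):
    """Flag static, id-like, and incomplete columns from a
    {column_name: list_of_values} mapping (illustrative heuristics only)."""
    n = len(next(iter(columns.values())))
    report = {"static": [], "id_like": [], "has_missing": []}
    for name, values in columns.items():
        non_missing = [v for v in values if v is not None]
        if len(set(non_missing)) <= 1:
            report["static"].append(name)       # a single distinct value
        if len(set(values)) == n and all(isinstance(v, int) for v in values):
            report["id_like"].append(name)      # unique integers: id suspect
        if len(non_missing) < n:
            report["has_missing"].append(name)  # contains missing entries
    return report

report = data_check({
    "row_id": [1, 2, 3, 4],
    "const": ["a", "a", "a", "a"],
    "age": [31, None, 45, 52],
})
```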
Appropriate data preprocessing is crucial to building a model with a small error rate. To face that issue, we introduce a data check report summarizing the dataset with some basic information and pointing out possible problems. Data problems can affect the following modeling stages and be relevant to any model. The data check report points out id-like, duplicated, static, or highly correlated columns. Moreover, it points out the outliers, missing values, and the imbalance of the target. This way we can propose some simple heuristic data preprocessing methods, yet more advanced users are able to fight the issues mentioned by studying the data check report on their own.

2. Data preparation
Preparing the data for modeling is another crucial aspect after checking the data. It can be done using a dedicated tool, but the forester package offers two general-purpose preprocessing methods, basic and advanced. The main purpose of this function is to remove the need to prepare data manually in different ways for different types of models. The basic preparation consists of the actions that are necessary for the package to work, that is: the removal of static columns, binarization of the target variable, and imputation of the missing data using the MICE algorithm (Buuren and Groothuis-Oudshoorn, 2011). The advanced method additionally includes the removal of id-like columns (features suspected of being ids), removal of highly correlated columns (Spearman's rank for the numerical features and Cramér's V for the categorical features), as well as feature selection with the BORUTA algorithm (Kursa and Rudnicki, 2010). Additionally, every model in the forester package requires a different data format, which is also prepared inside the main function.

3. Model training and tuning
The forester package's third and most important pillar is model training and tuning. Our solution focuses on the tree-based model family because of their high-quality performance for various tabular data tasks.
We've limited ourselves to 5 well-known engines with different strong and weak points, so that they complement each other.

We have included the basic decision tree from the partykit package (Hothorn and Zeileis, 2015) as an extremely light engine, but mostly we have focused on the ensemble models. The only bagging representative is the random forest from the ranger package (Wright and Ziegler, 2017), which is reluctant to overfit.

We have also considered three different boosting algorithms. The XGBoost model (Chen and Guestrin, 2016) is highly effective, but due to the need for one-hot encoding, it suffers from an abundance of categorical features. The LightGBM model (Ke et al., 2017) works best for medium and large datasets but has problems with small ones. The last engine is CatBoost (Prokhorenkova et al., 2018), which can achieve superior performance but requires the Java environment installed, which is a minor inconvenience.

The models are trained with three approaches: using the default parameters, performing the random search algorithm within the predefined parameter space, and running an advanced Bayesian optimization algorithm for fine-grained tuning. The first method is the baseline for the others. With the second one, we can cheaply create multiple models and explore various parameter combinations. The best and most time-consuming method is the Bayesian optimization from the ParBayesianOptimization package; it is extremely useful for complex tasks.

4. Model evaluation
The last pillar is the automatic evaluation of the trained models. The forester package assesses every trained model by various metrics, such as accuracy, area under the receiver operating characteristic curve (AUC), and F1 for the binary classification tasks, and Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), or R2 for the regression tasks. The results are later presented as a ranked list sorted by the outcomes (for example, ascending order for RMSE and descending for AUC).
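The ranked list described above, sorted ascending for error metrics and descending otherwise, can be sketched in a few lines. Everything here is illustrative Python, not the forester's R API; the custom-metric hook mirrors the user-defined metrics feature, and the metric names follow the paper:

```python
# Direction of "better" per metric: ascending for errors, descending otherwise.
ASCENDING = {"rmse", "mse", "mae"}

def ranked_list(results, metric):
    """Sort model results into a ranked list by one metric.

    results: list of dicts like {"name": ..., "rmse": ..., "auc": ...}.
    metric may also be a (name, scorer) pair acting as a custom metric."""
    if isinstance(metric, tuple):                  # custom metric hook
        name, scorer = metric
        results = [{**r, name: scorer(r)} for r in results]
        metric = name
    reverse = metric not in ASCENDING              # descending unless error metric
    return sorted(results, key=lambda r: r[metric], reverse=reverse)

models = [
    {"name": "xgboost_bayes", "rmse": 0.61, "auc": 0.74},
    {"name": "ranger_model", "rmse": 0.59, "auc": 0.71},
]
by_rmse = ranked_list(models, "rmse")   # ascending: lower RMSE ranks first
by_auc = ranked_list(models, "auc")     # descending: higher AUC ranks first
```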
Moreover, the user can define their own metrics and provide them for the evaluation phase.
4 forester features
One of the most important goals of the forester package is convenience of use, helping users focus on analyzing the results instead of writing code. To provide such a user-friendly environment, the forester offers plenty of additional features useful for data scientists.
4.1 Model explanations
In recent years, interpretable machine learning has become a significant trend in machine learning. Tools providing interpretability, such as DALEX (Biecek, 2018) or iml (Molnar et al., 2020), allow data scientists to explain how the models they create work, making it easier to detect their misbehavior. Model explainability also enhances trust in such tools, even in demanding environments such as medical research. To support the use of explainable methods for models trained by the forester, we have created a wrapper for the DALEX explainer that is compatible with our package. This way, the user can easily create various explanations for the trained models.
4.2 Saving the outcomes
Another crucial feature is the save function, which lets the user save the training output. The returned forester object contains a lot of information, such as the preprocessed dataset, the split datasets, the split indexes, ranked lists for the training, testing, and validation datasets, the predictions of the models, and much more. This abundance of objects makes it incredibly important to save the outcomes of the time-consuming training process.
4.3 Automated report
Last but not least, our solution offers an automatically generated report that helps users quickly and easily analyze the training results. The main goal of this feature is to ensure that every user is able to easily assess the quality of the trained models. The report consists of basic information about the dataset, a data check report, a ranked list of the best ten models, and visualizations concerning model quality.
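The kind of explanation enabled by the wrapper from Section 4.1 can be sketched with DALEX directly. A minimal example, assuming a ranger model fitted on DALEX's built-in titanic_imputed data as a stand-in for a forester-trained model:

```r
# Sketch of explaining a model with DALEX; the ranger model below is an
# illustrative stand-in for a model trained by the forester.
library(DALEX)
library(ranger)

model <- ranger(survived ~ ., data = titanic_imputed,
                classification = TRUE, probability = TRUE)

explainer <- explain(model,
                     data = titanic_imputed[, -8],  # all columns but the target
                     y = titanic_imputed$survived,
                     verbose = FALSE)

# Permutation-based feature importance, like the plot in the report.
importance <- model_parts(explainer)
plot(importance)
```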
An example report for the blood-transfusion-service-center dataset (from the OpenML-CC18 benchmark (Bischl et al., 2021)) is provided in Appendix G.
The plots are divided into two groups. The first compares the outcomes of different models, which helps to decide which model is the best; for example, guided by the radar chart comparison plot, we can choose a model with slightly worse accuracy but better AUC and F1 values. The second type of plot concentrates on the model with the best performance, and its most prominent feature is the feature importance plot. This visualization lets us understand which variables are the most important for the model; thus, we can evaluate its correctness. It is worth noticing that the reports, mostly the visualizations, differ between binary classification and regression tasks, as we measure their performance differently.
5 User interface
5.1 Training function
The forester's main train() function runs the entire AutoML pipeline, including data preparation, model training, and evaluation. To keep the package as simple as possible, the function requires only the dataset and the target column name (Listing 1); however, to keep the tool versatile, there are lots of custom parameters for more advanced users (Listing 2).
With the latter option, the user can specify the number of Bayesian optimization iterations, the number of random search evaluations, and the proportions of the train, test, and validation subsets, change the preprocessing method, or even add their own evaluation metric.

train_output <- train(data = lisbon, y = 'Price')

Listing 1: Training models with the forester package and default parameters.

train_output <- train(
  data = lisbon,
  y = 'Price',
  verbose = TRUE,
  engine = c('ranger', 'xgboost', 'decision_tree', 'lightgbm', 'catboost'),
  train_test_split = c(0.6, 0.2, 0.2),
  bayes_iter = 10,
  random_evals = 3,
  advanced_preprocessing = FALSE,
  metrics = 'auto',
  sort_by = 'auto',
  metric_function = NULL,
  metric_function_name = NULL,
  metric_function_decreasing = TRUE,
  best_model_number = 5
)

Listing 2: Training models with the forester package and custom parameters.

5.2 Extensive features
Apart from the train() function, the user can utilize additional functions that are helpful during the modeling process. The check_data() function (Listing 3) prints a data check report outside of the train() function. The save() function (Listing 4) saves the outcome of the training process, whereas the report() function (Listing 5) creates a training report.
The last extension is the explain() function (Listing 6), which creates a DALEX explainer that can be used to generate multiple visualizations concerning model interpretability with the DALEX package.

check_data(data = `blood-transfusion-service-center`, y = 'Class')

Listing 3: Generating a data check report.

save(train_output, name = 'train_output.RData')

Listing 4: Saving the train output.

report(train_output, 'report.pdf')

Listing 5: Generating a report from the train output.

exp <- explain(
  models = train_output$best_models[[1]],
  test_data = train_output$data,
  y = train_output$y,
  verbose = FALSE
)

Listing 6: Creating a model explainer that lets us use functions from the DALEX package.

6 Performance
To evaluate the performance of the package, we compared it with the H2O framework on the binary classification tasks from the OpenML-CC18 benchmark (Bischl et al., 2021) and regression tasks from OpenML (Vanschoren et al., 2013). Due to limited computational resources, we chose a subset of 8 datasets for classification and 7 for regression, described in Table 1 and Table 2, respectively. The binary classification datasets consist mainly of categorical variables and contain many missing values, a significant obstacle for both solutions, whereas the regression tasks have no missing values and mostly numeric or binary features.
During the experiment, we trained the forester package three times for each dataset with random seeds provided to the data-splitting function inside the forester. The same splits were later used for the H2O framework. A single training iteration was executed for the decision tree, random forest, LightGBM, and CatBoost engines with ten iterations of Bayesian optimization and ten random search evaluations.
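For reference, the regression metrics used in this evaluation can be computed in a few lines of base R; the observed and predicted vectors below are illustrative:

```r
# Base-R computation of the regression metrics reported in the benchmark.
y     <- c(3.1, 2.8, 4.0, 5.2, 3.7)  # observed values (illustrative)
y_hat <- c(3.0, 3.1, 3.8, 5.0, 3.9)  # predicted values (illustrative)

rmse <- sqrt(mean((y - y_hat)^2))                      # root mean squared error
mae  <- mean(abs(y - y_hat))                           # mean absolute error
r2   <- 1 - sum((y - y_hat)^2) / sum((y - mean(y))^2)  # coefficient of determination
```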
For the regression tasks, we additionally added the XGBoost engine. To ensure that both frameworks had the same amount of time, we measured the time of every forester training iteration and provided it as the budget for the respective H2O AutoML run. This H2O functionality did not work as expected, and in the end that framework had, on average, a training time twice as long. This factor certainly improved the H2O results, and we have to bear it in mind when comparing the outcomes; for further details, see Appendix E. Additionally, to ensure the same data split, we used the indexes saved during the forester training. The source codes are included in Appendix A.
The performance comparison of both frameworks is presented in Figure 2 and Figure 3. For the raw results, as well as the aggregated tabular ones, see Appendix C. For the binary classification task, the forester outperformed the H2O framework on five datasets: banknote-authentication, blood-transfusion-service-center, credit-approval, credit-g, and diabetes. The outcomes for the very simple datasets kr-vs-kp and breast-w were similar, and H2O obtained better performance on the phoneme data. For the regression tasks, the results were comparable to H2O's on most tasks or slightly worse, as for the pol dataset. The results show that the forester creates high-quality models that are competitive with the existing solutions. However, our conclusions cannot be too far-fetched, since we tested the package on only a few datasets for binary classification and regression.
We cannot say that the forester package's predictive power is better than H2O's, but the two are clearly competitive.

Table 1: A subset of the OpenML-CC18 benchmark datasets used during the evaluation of the forester package, covering binary classification tasks on tabular data. The features are mostly categorical, and they contain many missing values.

Name                              Number of columns  Number of rows
kr-vs-kp                          37                 3196
breast-w                          10                 699
credit-approval                   16                 690
credit-g                          21                 1000
diabetes                          9                  768
phoneme                           6                  5404
banknote-authentication           5                  1372
blood-transfusion-service-center  5                  748

Table 2: A subset of OpenML datasets used during the evaluation of the forester package, covering regression tasks on tabular data. These datasets have no missing values, and the features are mostly numerical or binary.

Name                                 Number of columns  Number of rows
bank32nh                             33                 8192
wine_quality                         12                 6497
Mercedes_Benz_Greener_Manufacturing  378                4209
kin8nm                               9                  8192
pol                                  49                 15000
2dplanes                             11                 40768
elevators                            19                 16599

Figure 2: Performance comparison of the forester and H2O frameworks (accuracy on the train, valid, and test subsets) for the binary classification datasets described in Table 1. Every experiment is conducted 3 times, which results in three observations on the plot for each dataset. Note that in some cases the dots might overlap.
This plot shows that the forester performs better than the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

Figure 3: Performance comparison of the forester and H2O frameworks (RMSE on the train, valid, and test subsets) for the regression datasets described in Table 2. Every experiment is conducted 3 times, which results in three observations on the plot for each dataset. Note that in some cases the dots might overlap. This plot shows that the forester performs comparably to the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

7 Limitations and Broader Impact Statement
The forester package is limited in the availability of models. The library contains only tree-based models, but this family proves to be extremely versatile. Only binary classification and regression are available in the current version of the package; preparing models for multi-class classification, cluster analysis, or survival analysis is currently impossible. However, these features can be implemented in the future. The package currently performs better with smaller datasets; large and complex data require a large allocation of memory and time.
One of the strongest points of the forester package is that it is incredibly easy to use, even without broad machine learning expertise. This approach, however, raises the risk that the models trained with the package will be of poor quality, for example due to training on a low-quality dataset, or that the outcomes will be misunderstood or incorrectly interpreted by an inexperienced user.
The reporting module addresses these responsible machine learning concerns: it informs about possible issues with the data, measures the quality of the models, and provides their explanations.
8 Conclusions
This paper presents an R package for AutoML that creates models for regression and binary classification tasks on tabular data. Our solution addresses the needs we have observed in AutoML tools across various programming languages. The main goals of the package are to keep it stable and easy to use, to automate all the necessary steps of the ML pipeline, and to provide results that are easy to create and understand and that allow for model diagnostics. To achieve this, we focused only on the best representatives of the family of tree-based models, which show superiority over other methods on tabular data. Furthermore, we provide additional functions that allow the user to save the models, create explanations, and generate a report describing the learning process and the developed models. The experiments carried out tentatively indicate that our solution obtains more predictive power than the currently existing solutions in R.
9 Submission Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We introduced the forester package and described its potential. Sections 3 and 4 describe the various features.
(b) Did you describe the limitations of your work? [Yes] See Section 7.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 7.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] We believe that our paper conforms to the guidelines.
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results?
[N/A] We have no theoretical results.
(b) Did you include complete proofs of all theoretical results? [N/A] We have no theoretical results.
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation, and execution commands (either in the supplemental material or as a url)? [Yes] See Appendix A.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] The most important results analyzed in this paper are presented or mentioned (via a link) in Appendix C.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is available on the package's GitHub repository in the form of an R Markdown notebook; see Appendix A.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] The training details are mentioned in Section 6, as well as in the source code described in Appendix A.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training, and hyperparameters for that code?
[Yes] The methods were compared on the same train, test, and validation subsets, and the hyperparameter search space was the default one for each AutoML framework.
(g) Did you run ablation studies to assess the impact of different components of your approach? [No] The package at this point is fairly straightforward and does not contain many components that could alter the outcomes. A possible ablation study could target the advanced preprocessing method; however, we did not have enough computational power to run the benchmark again.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] The models were compared with the same metrics: accuracy, AUC, and F1 for classification, and RMSE, MSE, R2, and MAE for regression.
(i) Did you compare performance over time? [No] We did not have enough resources for multiple experiment executions.
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes] As described in Section 6, we performed three runs of the forester and H2O training with the random seeds for the train, test, and validation splits set to 123, 2137, and 21.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] We do not show error bars on the visualizations, but we provide exact values without any statistical aggregation.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We used a tabular benchmark consisting of 8 datasets describing binary classification tasks from the OpenML-CC18 benchmark, as described in Section 6.
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] See Appendix B.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)?
[N/A] During the experiments, all computations were conducted by the AutoML frameworks, and no additional tuning was included.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] A full list of the cited papers/tools is given in the references.
(b) Did you mention the license of the assets? [Yes] The used assets, mostly R packages, are described in Appendix D.
(c) Did you include any new assets either in the supplemental material or as a url? [Yes] The forester package is a new asset: https://github.com/ModelOriented/forester.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [Yes] See Section 6; we are using OpenML-CC18 and its data. We cited all data sources according to the guidelines of datasets on OpenML (and in OpenML-CC18).
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] Our data does not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not conduct research with human subjects.
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A] We did not conduct research with human subjects.
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not conduct research with human subjects.
Acknowledgements. We would like to thank Adrianna Grudzień and Patryk Słowakiewicz for their development work on the forester package.
We also thank Katarzyna Woźnica, Hubert Baniecki, Mikołaj Spytek, and Mateusz Krzyziński for their valuable comments about the study.
References
Bavarian, M., Jun, H., Tezak, N., Schulman, J., McLeavey, C., Tworek, J., and Chen, M. (2022). Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255.
Biecek, P. (2018). DALEX: Explainers for Complex Predictive Models in R. Journal of Machine Learning Research, 19(84):1–5.
Biecek, P. and Burzykowski, T. (2021). Explanatory Model Analysis. Chapman and Hall/CRC, New York.
Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn, J. N., and Vanschoren, J. (2021). OpenML benchmarking suites. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Buuren, S. and Groothuis-Oudshoorn, C. (2011). MICE: Multivariate Imputation by Chained Equations in R. Journal of Statistical Software, 45.
Caruana, R., Karampatziakis, N., and Yessenalina, A. (2008). An empirical evaluation of supervised learning in high dimensions. Proceedings of the 25th International Conference on Machine Learning, pages 96–103.
Chen, T. and Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 785–794.
Fararni, K. A., Nafis, F., Aghoutane, B., Yahyaouy, A., Riffi, J., and Sabri, A. (2021). Hybrid recommender system for tourism based on big data and AI: A conceptual framework. Big Data Mining and Analytics, 4(1):47–55.
Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2022). Auto-Sklearn 2.0: Hands-free AutoML via Meta-Learning. Journal of Machine Learning Research, 23(261):1–61.
Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning.
In Advances in Neural Information Processing Systems, volume 28.
Grinsztajn, L., Oyallon, E., and Varoquaux, G. (2022). Why do tree-based models still outperform deep learning on typical tabular data? In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Hothorn, T. and Zeileis, A. (2015). partykit: A Modular Toolkit for Recursive Partytioning in R. Journal of Machine Learning Research, 16(118):3905–3909.
Jorge, C. C., Antonio, O. A. J., Hugo, G. M. V., and Hugo, O. P. D. (2022). Machine Learning for Personal Credit Evaluation: A Systematic Review. WSEAS TRANSACTIONS ON COMPUTER RESEARCH, 10:62–73.
Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems, volume 30.
Kursa, M. B. and Rudnicki, W. R. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software, 36(11):1–13.
Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff, L., and Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, 4(44):1903.
LeDell, E., Gill, N., Aiello, S., Fu, A., Candel, A., Click, C., Kraljevic, T., Nykodym, T., Aboyoun, P., Kurka, M., and Malohlava, M. (2022). h2o: R Interface for the 'H2O' Scalable Machine Learning Platform. R package version 3.38.0.1.
Molnar, C., Casalicchio, G., and Bischl, B. (2020). Interpretable machine learning – a brief history, state-of-the-art and challenges. In ECML PKDD 2020 Workshops, pages 417–431.
Olson, R. S., Bartley, N., Urbanowicz, R. J., and Moore, J. H. (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In Proceedings of the Genetic and Evolutionary Computation Conference 2016, GECCO '16, pages 485–492.
Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. (2018).
CatBoost: unbiased boosting with categorical features. In Advances in Neural Information Processing Systems, volume 31.
R Core Team (2022). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L., and Zurada, J. (2010). Artificial Intelligence and Soft Computing, Part II: 10th International Conference, ICAISC 2010.
Shimizu, H. and Nakayama, K. I. (2020). Artificial intelligence in oncology. Cancer Science, 111(5):1452–1460.
Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, volume 25.
Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 847–855.
Vanschoren, J. (2019). Meta-Learning, pages 35–61. Springer International Publishing, Cham.
Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. (2013). OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49–60.
Vilalta, R., Giraud-Carrier, C., Brazdil, P., and Soares, C. (2004). Using meta-learning to support data mining. International Journal of Computer Science Applications, 1.
Wirth, R. and Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining.
Woźnica, K. and Biecek, P. (2022). Towards explainable meta-learning. In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part I, pages 505–520.
Wright, M. N. and Ziegler, A. (2017).
ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R. Journal of Statistical Software, 77(1):1–17.
A Source Code
The source code of the experiments, the prepared visualizations, and the tables from Appendix C are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments as the forester_benchmark.Rmd file. The markdown notebook describes the installation process and can be safely executed with the guidance of our remarks between the code chunks.
B Resources
As mentioned in Section 6, our team was limited in computational power. The experiment was conducted on our private PC with 32 GB of RAM, an 11th Gen Intel(R) Core(TM) i7-11700KF @ 3.60GHz CPU (16 cores), and an NVIDIA GeForce RTX 3070 Ti GPU; however, as the forester is not yet implemented to work on the GPU, only the CPU was used.
C Raw results
In this section we provide information about the raw results mentioned in Section 6 and used in Figure 2. The raw results for the train, test, and validation datasets are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/raw_training_results. Here we offer the results aggregated as mean metric values, presented in Table 3, Table 4, and Table 5 for the binary classification tasks. These tables also broaden our perspective by providing AUC and F1 values. The results for the regression tasks are presented in Table 6, Table 7, and Table 8.
These tables also broaden our perspectiveby providing MSE, R2, and MAE values.13Table 3: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification training datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 1 1 1banknote-authentication H2O 0.929 0.923 0.905blood-transfusion-service-center forester 0.77 0.752 1blood-transfusion-service-center H2O 0.7 0.682 0.519breast-w forester 1 1 1breast-w H2O 0.998 0.998 0.997credit-approval forester 0.999 1 1credit-approval H2O 0.961 0.959 0.955credit-g forester 0.967 0.998 1credit-g H2O 0.906 0.855 0.938diabetes forester 0.991 0.999 1diabetes H2O 0.874 0.871 0.826kr-vs-kp forester 1 1 1kr-vs-kp H2O 0.999 0.999 0.965phoneme forester 1 1 1phoneme H2O 1 1 1Table 4: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification testing datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 0.995 0.995 1banknote-authentication H2O 0.933 0.927 0.915blood-transfusion-service-center forester 0.796 0.772 0.976blood-transfusion-service-center H2O 0.713 0.707 0.54breast-w forester 0.976 0.984 0.986breast-w H2O 0.971 0.97 0.959credit-approval forester 0.885 0.931 0.942credit-approval H2O 0.882 0.882 0.87credit-g forester 0.733 0.79 0.865credit-g H2O 0.743 0.64 0.829diabetes forester 0.768 0.823 0.799diabetes H2O 0.753 0.727 0.643kr-vs-kp forester 0.994 0.999 0.991kr-vs-kp H2O 0.991 0.991 0.991phoneme forester 0.909 0.96 0.867phoneme H2O 0.904 0.895 0.84214Table 5: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification validation datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 1 1 1banknote-authentication H2O 0.916 0.908 0.887blood-transfusion-service-center forester 0.775 0.773 0.833blood-transfusion-service-center H2O 0.675 
0.68 0.509breast-w forester 0.938 0.968 0.956breast-w H2O 0.967 0.97 0.953credit-approval forester 0.855 0.908 0.939credit-approval H2O 0.867 0.862 0.842credit-g forester 0.705 0.788 1credit-g H2O 0.758 0.635 0.846diabetes forester 0.747 0.803 0.866diabetes H2O 0.755 0.735 0.656kr-vs-kp forester 0.99 0.999 0.99kr-vs-kp H2O 0.99 0.99 0.99phoneme forester 0.901 0.954 0.851phoneme H2O 0.9 0.896 0.839Table 6: This table provides mean RMSE, MSE, R2, and MAE values for the forester andH2O frameworkfor all regression training datasets used in the benchmark.task_name framework rmse mse r2 mae2dplanes forester 0.697 0.5 0.974 0.4232dplanes H2O 0.984 0.969 0.95 0.785bank32nh forester 0.001 0 1 0.001bank32nh H2O 0.054 0.003 0.806 0.037elevators forester 0.001 0 0.978 0.001elevators H2O 0.002 0 0.942 0.001kin8nm forester 0.012 0 0.997 0.009kin8nm H2O 0.066 0.004 0.937 0.051Mercedes_Benz_Greener_Manufacturing forester 2.456 6.13 0.963 0.775Mercedes_Benz_Greener_Manufacturing H2O 7.806 61.115 0.625 4.935pol forester 1.139 1.483 0.999 0.699pol H2O 1.803 3.251 0.998 0.829wine_quality forester 0.071 0.005 0.993 0.031wine_quality H2O 0.161 0.027 0.965 0.12415Table 7: This table provides mean RMSE, MSE, R2, and MAE values for the forester andH2O frameworkfor all regression testing datasets used in the benchmark.task_name framework rmse mse r2 mae2dplanes forester 1.003 1.007 0.948 0.8022dplanes H2O 1.004 1.008 0.948 0.802bank32nh forester 0.08 0.006 0.548 0.053bank32nh H2O 0.076 0.006 0.599 0.05elevators forester 0.002 0 0.884 0.002elevators H2O 0.002 0 0.911 0.001kin8nm forester 0.113 0.013 0.816 0.087kin8nm H2O 0.084 0.007 0.899 0.065Mercedes_Benz_Greener_Manufacturing forester 7.554 57.195 0.626 5.039Mercedes_Benz_Greener_Manufacturing H2O 7.583 57.598 0.623 5.222pol forester 4.739 22.508 0.987 2.242pol H2O 3.198 10.278 0.994 1.3wine_quality forester 0.614 0.377 0.505 0.451wine_quality H2O 0.604 0.365 0.521 0.43Table 8: This table provides mean RMSE, MSE, R2, and MAE values for 
… the forester and H2O framework for all regression validation datasets used in the benchmark.

task_name                            framework  rmse   mse     r2     mae
2dplanes                             forester   0.999  0.997   0.948  0.799
2dplanes                             H2O        1      0.999   0.948  0.8
bank32nh                             forester   0.082  0.007   0.544  0.053
bank32nh                             H2O        0.078  0.006   0.591  0.052
elevators                            forester   0.002  0       0.875  0.002
elevators                            H2O        0.002  0       0.907  0.001
kin8nm                               forester   0.111  0.012   0.822  0.085
kin8nm                               H2O        0.083  0.007   0.899  0.065
Mercedes_Benz_Greener_Manufacturing  forester   8.464  73.039  0.559  5.261
Mercedes_Benz_Greener_Manufacturing  H2O        8.458  72.911  0.56   5.373
pol                                  forester   4.379  19.256  0.989  1.885
pol                                  H2O        3.01   9.087   0.995  1.213
wine_quality                         forester   0.632  0.399   0.478  0.466
wine_quality                         H2O        0.624  0.389   0.492  0.447

D Used assets

In this section we describe the packages used both for the forester and for the experiments. The packages outside of the forester that are required for the experiments are listed in Table 9. An additional requirement for the catboost and H2O packages is an installed Java runtime. The packages required by the forester, together with the versions used during the experiments, are presented in Table 10.

Table 9: The packages and their versions under which the experiments were executed and supplemental materials were created.

package     version   license
xlsx        0.6.5     GPL-3
stringr     1.5.0     MIT
ggbeeswarm  0.6.0     GPL (>= 2)
dplyr       1.0.10    MIT
ggplot2     3.4.0     MIT
tictoc      1.1       Apache License (== 2.0)
H2O         3.38.0.1  Apache License (== 2.0)
forester    1.2.1     GPL-3
OpenML      1.12      BSD_3_clause

Table 10: The forester package's dependencies and their versions used during the experiments.

package                  version  license
Boruta                   7.0.0    GPL (>= 2)
catboost                 1.1.1    Apache License (== 2.0)
crayon                   1.5.2    MIT
DALEX                    2.4.2    GPL
data.table               1.14.2   MPL-2.0
ggplot2                  3.4.0    MIT
ggradar                  0.2      GPL
ggrepel                  0.9.3    GPL-3
knitr                    1.40     GPL
lightgbm                 3.3.2    MIT
mice                     3.14.0   GPL-2 | GPL-3
mltools                  0.3.5    MIT
ParBayesianOptimization  1.2.4    GPL-2
partykit                 1.2-16   GPL-2 | GPL-3
pROC                     1.18.0   GPL (>= 3)
ranger                   0.14.1   GPL-3
rcompanion               2.4.18   GPL-3
rmarkdown                2.16     GPL-3
splitTools               0.3.2    GPL (>= 2)
testthat                 3.1.6    MIT
tibble                   3.1.8    MIT
tinytex                  0.43     MIT
varhandle                2.0.5    GPL (>= 2)
xgboost                  1.6.0.1  Apache License (== 2.0)
stats                    4.1.2    Part of R 4.1.2

E Execution times comparison

In this section we briefly explore the time needed to execute every experiment in both frameworks. The results presented in Table 11 and Table 12 show that the final execution times differ, despite the H2O experiments being given exactly the same time budgets as the forester. Our empirical results show that the H2O runs lasted twice as long on average as the forester runs, which puts the comparison of the frameworks' performance in a different light. The raw results behind these tables are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/execution_times.

Table 11: The comparison of mean execution times in seconds for the forester and H2O for binary classification experiments.

task_name                         forester  H2O      difference  relative difference
banknote-authentication           818.33    2521.33  -1703       0.28
blood-transfusion-service-center  155.67    555.67   -400        0.26
breast-w                          451.33    797.33   -346        0.57
credit-approval                   805       1513     -708        0.53
credit-g                          2453      4234     -1781       0.58
diabetes                          1645.67   2643.67  -998        0.62
kr-vs-kp                          451.33    806.67   -355.33     0.57
phoneme                           2748.33   3695.33  -947        0.67

Table 12: The comparison of mean execution times in seconds for the forester and H2O for regression experiments.

task_name                            forester  H2O      difference  relative difference
2dplanes                             401       1050.67  -649.67     0.38
bank32nh                             708.67    1214.67  -506        0.58
elevators                            720.33    1435.33  -715        0.5
kin8nm                               544.67    1564     -1019.33    0.35
Mercedes_Benz_Greener_Manufacturing  848       1371.67  -523.67     0.61
pol                                  756       1548.33  -792.33     0.49
wine_quality                         1317.33   2130     -812.67     0.63

F Package comparison

We have prepared a notebook showing the differences between the packages described in the related work section. The document includes a comparison of package installation, a description of available preprocessing, variable selection options, and model tuning.
In addition, visualizations, methods of explainable machine learning, report preparation, and references to the available package documentation are described. We do not give a final assessment of which package is best, because such a judgment would be subjective; instead, we leave the critical assessment to the reader. The notebook is available in the GitHub repository https://github.com/ModelOriented/forester/blob/main/misc/experiments/framework_comparison.Rmd.

G Report example

Forester report
version 1.2.1
2023-05-20 01:36:36

This report contains details about the best trained model, a table with metrics for every trained model, a scatter plot for the chosen metric, and information about the data used.

The best models

This is the binary_clf task.
The best model is: xgboost_RS_5.

The names of the models were created by the pattern Engine_TuningMethod_Id, where:
• Engine describes the engine used for the training (random_forest, xgboost, decision_tree, lightgbm, catboost),
• TuningMethod describes how the model was tuned (basic for basic parameters, RS for random search, bayes for Bayesian optimization),
• Id separates the random search parameter sets.

More details about the best model are presented at the end of the report.

no.  name            accuracy  auc     f1
13   xgboost_RS_5    0.7919    0.8088  0.2791
7    ranger_RS_4     0.7785    0.6965  0.1538
18   lightgbm_RS_5   0.7785    0.7361  0.4211
2    xgboost_model   0.7718    0.7090  0.4138
14   lightgbm_RS_1   0.7718    0.7578  0.3704
4    ranger_RS_1     0.7651    0.7930  NaN
6    ranger_RS_3     0.7651    0.7228  NaN
10   xgboost_RS_2    0.7651    0.7801  NaN
11   xgboost_RS_3    0.7651    0.7367  NaN
16   lightgbm_RS_3   0.7651    0.7690  NaN
21   lightgbm_bayes  0.7651    0.7340  0.3636
8    ranger_RS_5     0.7584    0.7579  0.0526
12   xgboost_RS_4    0.7517    0.6609  0.3729
19   ranger_bayes    0.7517    0.7333  0.2449
20   xgboost_bayes   0.7517    0.7409  0.2449
1    ranger_model    0.7450    0.7063  0.3214
3    lightgbm_model  0.7450    0.6842  0.3871
9    xgboost_RS_1    0.7450    0.6619  0.3667
15   lightgbm_RS_2   0.7181    0.6058  0.3824
17   lightgbm_RS_4   0.7181    0.6058  0.3824
5    ranger_RS_2     0.7114    0.6929  0.2712

Plots for all models
[Figure: model comparison bar chart of accuracy, auc, and f1 for xgboost_model, lightgbm_RS_1, xgboost_RS_5, ranger_RS_4, and lightgbm_RS_5.]

Plots for the best model - xgboost_RS_5
[Figure: ROC curve (AUC = 0.8088) and confusion matrix for xgboost_RS_5.]

Feature Importance for the best model - xgboost_RS_5
[Figure: permutation feature importance — root mean square error (RMSE) loss after permutations, created for the xgb.Booster model, for features V4, V3, V2, V1.]

Details about data

——————– CHECK DATA REPORT ——————–

The dataset has 748 observations and 5 columns, whose names are: V1; V2; V3; V4; Class; with the target value described by the column: Class.

No static columns.
No duplicate columns.
No target values are missing.
No predictor values are missing.
No issues with dimensionality.

Strongly correlated, by Spearman rank, pairs of numerical values are: V2 - V3: 1.

These observations might be outliers due to their numerical column values: 1 10 116 342 496 497 498 499 5 500 501 503 504 505 506 518 529 747 748.

The dataset is unbalanced, with a proportion of 3.202247, 1 being the dominating class.

Column names suggest that none of them are IDs.
Column data suggest that none of them are IDs.

——————– CHECK DATA REPORT END ——————–

The best model details

------------ Xgboost model ------------
Parameters
niter: 20
evaluation_log (iter : train_auc): rows for iterations 1–20 are present, but the train_auc values are blank in the report.
Venue: automl.cc/AutoML/2023/ABCD_Track
Year: 2023
Title: AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting
Authors: Oleksandr Shchur, Ali Caner Turkmen, Nick Erickson, Huibin Shen, Alexander Shirkov, Tony Hu, Bernie Wang
Abstract: We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic time series forecasting. Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines both conventional statistical models, machine-learning based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.
Keywords: AutoML, forecasting, time series, probabilistic forecasting
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting

Oleksandr Shchur¹, Caner Turkmen¹, Nick Erickson¹, Huibin Shen², Alexander Shirkov¹, Tony Hu¹, Yuyang Wang²
¹Amazon Web Services  ²AWS AI Labs

Abstract. We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic time series forecasting.¹ Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines both conventional statistical models, machine-learning based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.

1 Introduction

Time series (TS) forecasting is a fundamental statistical problem with applications in diverse domains such as inventory planning (Syntetos et al., 2009), smart grids (Hong et al., 2020), and epidemiology (Nikolopoulos et al., 2021). Decades of research led to the development of various forecasting approaches, from simple statistical models (Hyndman and Athanasopoulos, 2018) to expressive deep-learning-based architectures (Benidis et al., 2022). Despite the availability of various forecasting approaches, practitioners often struggle with selecting the most appropriate method and adhering to best practices when implementing and evaluating forecasting pipelines.

AutoML aims to mitigate these challenges by providing tools that enable practitioners to develop accurate and efficient predictive models without extensive domain knowledge.
While traditional AutoML methods have focused primarily on classification and regression tasks for tabular data (Thornton et al., 2013; Feurer et al., 2015; Olson and Moore, 2016; Erickson et al., 2020; LeDell and Poirier, 2020; Zimmer et al., 2021), automated time series forecasting has received comparatively less attention, with only a few open-source AutoML forecasting frameworks having been proposed (Deng et al., 2022; Catlin, 2022). Furthermore, existing automated forecasting frameworks tend to generate point forecasts without considering uncertainty, which is a crucial factor in many practical applications (Gneiting and Katzfuss, 2014).

To close this gap, we introduce AutoGluon–TimeSeries (AG–TS), an open-source AutoML framework for probabilistic time series forecasting written in Python. AG–TS can generate both point and probabilistic forecasts for collections of univariate time series. Together with support for static and time-varying covariates, this makes AG–TS applicable to most real-world forecasting tasks.

As part of the AutoGluon framework (Erickson et al., 2020; Shi et al., 2021), AG–TS adheres to the principles of ease of use and robustness, empowering users with limited expertise in the target domain to generate highly accurate predictions with minimal coding effort. The architecture is capable of handling failures of individual models when necessary, producing a valid result as long as any single model was trained successfully.

¹ https://github.com/autogluon/autogluon

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. © 2023 the authors, released under CC BY 4.0.

Figure 1: Point forecast (left) and quantile forecast (right) for a univariate time series.

We evaluate the performance of AG–TS against other established forecasting methods and AutoML systems using 29 publicly available benchmark datasets. The results demonstrate AG–TS's strong performance, outperforming various competing approaches in terms of both point and probabilistic forecast accuracy.
This highlights the potential of AG–TS as a valuable tool for practitioners and researchers seeking an automated and versatile solution for time series forecasting.

2 Probabilistic Time Series Forecasting

The probabilistic time series forecasting problem can be formally stated as follows. The data $\mathcal{D} = \{y_{i,1:T_i}\}_{i=1}^{N}$ is a collection of $N$ univariate time series, where $y_{i,1:T_i} = (y_{i,1}, \ldots, y_{i,T_i})$, $y_{i,t}$ is the value of the $i$-th time series at time $t$, and $T_i$ is the length of the $i$-th time series.² For example, $y_{i,t}$ may correspond to the number of units of product $i$ sold on day $t$. The goal of time series forecasting is to predict the future $H$ values for each time series in $\mathcal{D}$. The parameter $H$ is known as the prediction length or forecast horizon.

Each time series $y_{i,1:T}$ may additionally be associated with covariates $X_{i,1:T+H}$. These include both static covariates (e.g., location of the store, product ID) and time-varying covariates. The time-varying covariates may, in turn, be known in the future (e.g., day of the week, promotions) or only known in the past (e.g., weather, sales of other products).

In the most general form, the goal of probabilistic forecasting is to model the conditional distribution of the future time series values $y_{i,T+1:T+H}$ given the past values $y_{i,1:T}$ and the related covariates $X_{i,1:T+H}$:

$$p(y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H}).$$

In practice, we are rarely interested in the full predictive distribution and rather represent the range of possible outcomes with quantile forecasts $\hat{y}^{q}_{i,T+1:T+H}$ for chosen quantile levels $q \in (0,1)$. The quantile forecast implies that the future time series value $y_{i,T+h}$ is predicted to exceed $\hat{y}^{q}_{i,T+h}$ with probability $q$ (Wen et al., 2017; Lim et al., 2021).

If the uncertainty is of no interest, we can instead report a point forecast of the future time series values. For example, we can summarize the prediction using the conditional mean

$$\hat{y}_{i,T+1:T+H} = \mathbb{E}_p\left[y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H}\right].$$

Figure 1 demonstrates the difference between a point forecast and a quantile forecast.
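To make the distinction above concrete, both a point forecast and a set of quantile forecasts can be derived from samples of a predictive distribution. The sketch below is purely illustrative (the sampled distribution, horizon, and values are invented; this is not AG–TS code):

```python
import numpy as np

# Illustrative only: 1000 sampled future trajectories for one series
# over a horizon of H = 3 steps, drawn from an assumed predictive model.
rng = np.random.default_rng(seed=0)
samples = rng.normal(loc=[20.0, 22.0, 25.0], scale=2.0, size=(1000, 3))

# Point forecast: the conditional mean, estimated by averaging samples.
mean_forecast = samples.mean(axis=0)

# Quantile forecasts at levels q = 0.1, 0.5, 0.9: empirical quantiles
# of the sampled trajectories at each future time step.
quantile_levels = [0.1, 0.5, 0.9]
quantile_forecast = np.quantile(samples, quantile_levels, axis=0)
```

The 0.1 and 0.9 rows bound a prediction band like the shaded region in Figure 1, while the mean traces the single line of the point forecast.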
Finally, note that here we consider the problem of forecasting multiple univariate time series, also known as panel data, which is different from multivariate forecasting (Benidis et al., 2022).

² To reduce clutter in notation, we assume that all time series have the same length $T$ (even though AG–TS supports the case when time series have different lengths).

3 AutoGluon–TimeSeries

AutoGluon–TimeSeries enables users to generate probabilistic time series forecasts in a few lines of code, as shown by the following minimal example.

    from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

    train_data = TimeSeriesDataFrame.from_path("train.csv")
    predictor = TimeSeriesPredictor(prediction_length=30).fit(train_data)
    predictions = predictor.predict(train_data)  # forecast next 30 time steps

Loading the data. A TimeSeriesDataFrame object stores a collection of univariate time series and provides utilities such as loading data from disk and train-test splitting. Internally, time series data is represented as a pandas.DataFrame (pandas development team, 2020) in long format (Table 1), but loaders are also available for other formats. Besides the target time series that need to be forecast, a TimeSeriesDataFrame can also store the static and time-varying covariates.

Table 1: Collection of univariate time series stored as a TimeSeriesDataFrame. Each row contains the unique ID of the time series, a timestamp, and the value of the target time series.

item_id  timestamp   target
T1       2020-03-02  23
T1       2020-03-03  43
...      ...         ...
T999     2020-08-29  15
T999     2020-08-31  27

Defining the task. Users can specify the forecasting task by creating a TimeSeriesPredictor object. The task definition includes information such as the prediction length, the list of quantile levels to be predicted, and the evaluation metric. The evaluation metric should be chosen based on the downstream application.
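The long data format of Table 1 can be assembled with plain pandas before handing it to AutoGluon. A minimal sketch with invented values; the wrapping call at the end is an assumption about the TimeSeriesDataFrame constructor and is therefore left commented out:

```python
import pandas as pd

# Toy data in the long format of Table 1 (item_id, timestamp, target).
df = pd.DataFrame({
    "item_id": ["T1", "T1", "T1", "T999", "T999", "T999"],
    "timestamp": pd.to_datetime([
        "2020-03-02", "2020-03-03", "2020-03-04",
        "2020-08-29", "2020-08-30", "2020-08-31",
    ]),
    "target": [23, 43, 31, 15, 22, 27],
})

# Rows belonging to one series share an item_id and are ordered in time.
n_series = df["item_id"].nunique()

# With AutoGluon installed, the frame could then be wrapped (assumption):
# ts_df = TimeSeriesDataFrame.from_data_frame(df)
```

Keeping the data in this flat, long layout is what allows collections with thousands of series of different lengths to share a single frame.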
For example, mean weighted quantile loss (wQL) measures the accuracy of quantile forecasts, and mean absolute scaled error (MASE) reports the accuracy of the point forecast relative to a naive baseline. When creating the predictor, users can also specify which time-varying covariates are known in the future—the remainder will be treated as past-only covariates.

Fitting the predictor. Inside the fit() method, the predictor preprocesses the data, fits and evaluates various models using cross-validation, optionally performs hyperparameter optimization (HPO) on selected models, and trains an ensemble of the individual forecasting models. By default, AG–TS provides user-friendly presets users can choose from to manage the training time–accuracy tradeoff. Advanced users can also explicitly specify the models to use and their hyperparameters, or specify search spaces in which optimal hyperparameters will be searched.

Making predictions. After the predictor has been fit, the predict() method can be used to generate predictions on new data—including time series that haven't been seen during training. Like the input data, the predictions are stored in a long-format data frame, where the columns contain the mean (expected value) and quantile forecasts at the desired quantile levels (Table 2).

Documentation. We provide various additional resources on the official website auto.gluon.ai. These include installation instructions, tutorials, and a cheatsheet summarizing the main features.

3.1 Design Considerations

AG–TS was launched as a part of the AutoGluon suite (Erickson et al., 2020) in v0.5, building on the foundation of AutoGluon and borrowing some design elements from other forecasting libraries like GluonTS (Alexandrov et al., 2020). Since then, AG–TS has evolved into a full solution for time series forecasting. Below, we highlight some of AG–TS's key design principles.

Table 2: Mean and quantile forecasts generated by a TimeSeriesPredictor. The forecasts include the next prediction_length time steps of each time series in the dataset.

item_id  timestamp   mean  0.1  0.5  0.9
T1       2020-09-01  17    10   16   23
T1       2020-09-02  25    15   23   31
...      ...         ...   ...  ...  ...
T999     2020-09-29  33    21   33   36
T999     2020-09-30  30    24   28   34

Ensembles over HPO. AG–TS follows the AutoGluon philosophy, relying on ensembling techniques instead of HPO or neural architecture search. The library features a broad selection of models whose probabilistic forecasts are combined in an ensemble selection step (Caruana et al., 2004). AG–TS favors broadening the portfolio of forecasters over exploring the hyperparameter space of any particular model. While AG–TS does support HPO techniques, HPO is excluded from most preset configurations to reduce training time and minimize overfitting on the validation data.

Presets and default hyperparameters. In order to provide defaults that work well out of the box for users who are not familiar with forecasting, AG–TS includes various presets—high-level configuration options that allow users to trade off between fast training and higher accuracy. AG–TS follows the convention-over-configuration principle: all models feature default configurations of hyperparameters that are expected to work well given the selected preset. At the same time, advanced users have the option to manually configure individual models and use the TimeSeriesPredictor as a unified API for training, evaluating and combining various forecasting models (see documentation for details).

Model selection. Time series forecasting introduces unique challenges in model validation and selection. Importantly, as the main aim of the model is to generalize into the future, special care has to be taken to define validation sets that are held out across time. The AG–TS API is designed with this consideration. If the user does not explicitly specify a validation set, the library holds out the last prediction_length time steps of each time series as a validation set.
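The default hold-out scheme described under "Model selection" can be sketched in a few lines. This is an illustration of the idea, not the library's internal implementation, and the series values are invented:

```python
def train_val_split(series, prediction_length):
    """Hold out the last `prediction_length` steps of every series.

    `series` maps item_id -> list of observed values; the validation
    window is held out across time, never across series.
    """
    train, val = {}, {}
    for item_id, y in series.items():
        train[item_id] = y[:-prediction_length]
        val[item_id] = y[-prediction_length:]
    return train, val

data = {"T1": [23, 43, 31, 40, 38, 45], "T2": [5, 7, 6, 8, 9, 11]}
train, val = train_val_split(data, prediction_length=2)
# Validation scores computed on `val` drive model and ensemble selection.
```

Splitting across time rather than across series is what prevents information from the forecast horizon leaking into model selection.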
Optionally, multiple windows can be used to perform so-called backtesting.

3.2 Forecasting Models

There are two families of approaches to forecasting in large panels of time series. The first approach is to fit local classical parametric statistical models to each individual time series. The second approach is built on expressive machine-learning-based models that are fit globally on all time series at once. AG–TS features both approaches, incorporating forecasting models from both families and combining them in an ensemble.

Local models. This category contains conventional methods that capture simple patterns like trend and seasonality. Examples include ARIMA (Box et al., 1970), Theta (Assimakopoulos and Nikolopoulos, 2000) and ETS (Hyndman et al., 2008), as well as simple baselines like Seasonal Naive (Hyndman and Athanasopoulos, 2018). AG–TS relies on the implementations of these provided by StatsForecast (Garza et al., 2022).

The defining characteristic of local models is that a separate model is fit to each individual time series in the dataset (Januschowski et al., 2020). This means that local models need to be re-fit when making predictions for new time series not seen during training. To mitigate this limitation, AG–TS caches the model predictions and parallelizes their fitting across CPU cores using Joblib (Joblib Development Team, 2020).

Global models. Unlike local models, a single global model is fitted to the entire dataset and used to make predictions for all time series. The global models used by AG–TS can be subdivided into two categories: deep learning and tabular models. Deep-learning models such as DeepAR (Salinas et al., 2020), PatchTST (Nie et al., 2023), and the Temporal Fusion Transformer (Lim et al., 2021) use neural networks to generate probabilistic forecasts for future data. AG–TS uses the PyTorch-based deep learning models from GluonTS (Alexandrov et al., 2020).
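The per-series nature of local models can be illustrated with a Seasonal Naive baseline fitted independently to each series. For portability this sketch uses the standard library's thread pool rather than Joblib (which AG–TS itself uses), and all values are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def seasonal_naive(y, season_length, horizon):
    """Local baseline: repeat the last observed season into the future."""
    last_season = y[-season_length:]
    forecast = []
    while len(forecast) < horizon:
        forecast.extend(last_season)
    return forecast[:horizon]

# Two toy series with a period-2 seasonal pattern.
series = {"T1": [1, 2, 1, 2, 1, 2], "T2": [10, 20, 10, 20]}

# One independent "model" per series, evaluated in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    forecasts = dict(zip(
        series,
        pool.map(lambda y: seasonal_naive(y, season_length=2, horizon=3),
                 series.values()),
    ))
```

Because each call touches only one series, the work is embarrassingly parallel, which is exactly what makes the Joblib-based parallelization in AG–TS effective.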
Tabular models like LightGBM (Ke et al., 2017) operate by first converting the time series forecasting task into a tabular regression problem. This can be done either recursively—by predicting future time series values one at a time—or by directly forecasting all future values simultaneously (Januschowski et al., 2022). AG–TS relies on regression models provided by AutoGluon–Tabular and uses MLForecast (Nixtla, 2023) for converting them into tabular forecasters.

Global models typically provide faster inference compared to local models, since there is no need for re-training at prediction time. This, however, comes at the cost of longer training times, since more parameters need to be estimated. Global models also naturally handle various types of covariates and utilize information present across different time series, which is known as cross-learning (Semenoglou et al., 2021).

Ensembling. After AG–TS finishes sequentially fitting the individual models, they are combined using 100 steps of the forward selection algorithm (Caruana et al., 2004). The output of the ensemble is a convex combination of the model predictions:

$$\hat{y}^{\text{ensemble}}_{i,T+1:T+H} = \sum_{m=1}^{M} w_m \cdot \hat{y}^{(m)}_{i,T+1:T+H} \quad \text{subject to} \quad w_m \geq 0, \;\; \sum_{m=1}^{M} w_m = 1,$$

where $\hat{y}^{(m)}_{i,T+1:T+H}$ are either point or quantile forecasts generated by each of the $M$ trained models. Note that in the case of probabilistic forecasting, the ensemble computes a weighted average of the quantile forecasts of the individual models—a method known as Vincentization (Ratcliff, 1979).

The ensemble weights $w_m$ are tuned to optimize the chosen evaluation metric (e.g., wQL, MASE) on the out-of-fold predictions generated using time series cross-validation (Hyndman and Athanasopoulos, 2018). The main advantages of the forward selection algorithm are its simplicity, compatibility with arbitrary evaluation metrics, and the sparsity of the final ensemble.

4 Related work

Time series forecasting is a challenging task, and the idea of automated forecasting has long intrigued statistics and ML researchers.
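The forward selection step can be sketched as follows. This is a simplified illustration using squared error on a single validation window and invented toy forecasts, not the library's actual implementation:

```python
import numpy as np

def forward_selection(preds, y_val, n_steps=100):
    """Greedy ensemble selection in the style of Caruana et al. (2004).

    preds: array of shape (M, H) with each model's validation forecasts.
    Models are repeatedly added (with replacement) to minimize validation
    error; selection counts are normalized into convex weights.
    """
    n_models = preds.shape[0]
    counts = np.zeros(n_models)
    running_sum = np.zeros_like(y_val, dtype=float)
    for step in range(1, n_steps + 1):
        # Try adding each model to the running average; keep the best.
        errors = [np.mean(((running_sum + preds[m]) / step - y_val) ** 2)
                  for m in range(n_models)]
        best = int(np.argmin(errors))
        counts[best] += 1
        running_sum += preds[best]
    return counts / counts.sum()

# Toy validation data: model 0 is nearly unbiased, model 1 is badly biased.
y_val = np.array([1.0, 2.0, 3.0])
preds = np.array([[1.1, 2.0, 2.9],
                  [3.0, 4.0, 5.0]])
weights = forward_selection(preds, y_val, n_steps=10)
```

The resulting weights are nonnegative and sum to one, so the combined forecast is exactly the convex combination given above; replacing the point forecasts with quantile forecasts yields the Vincentization mentioned in the text.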
An early influential work on automated forecasting was the R package forecast (Hyndman and Khandakar, 2008) that introduced the AutoETS and AutoARIMA models. These models automatically tune their parameters (e.g., trend, seasonality) for each individual time series using an in-sample information criterion.

The following decade saw a growing focus on deep learning models for time series (Benidis et al., 2022; Wen et al., 2017; Salinas et al., 2020; Lim et al., 2021; Oreshkin et al., 2020). Several works have explored how such neural-network-based models can be combined with AutoML techniques to generate automated forecasting solutions (Van Kuppevelt et al., 2020; Shah et al., 2021; Javeri et al., 2021). Another line of research focused on optimizing the entire forecasting pipeline—including data preprocessing and feature engineering—not just hyperparameter tuning for individual models (Dahl, 2020; Kurian et al., 2021; da Silva et al., 2022). A recent survey by Meisenbacher et al. (2022) provides an overview of such automated pipelines.

Even though AutoML for forecasting is becoming an active research topic, few of the recent developments have found their way from academic papers to software packages. Available open-source AutoML forecasting libraries include AutoPyTorch–Forecasting (Deng et al., 2022), AutoTS (Catlin, 2022) and PyCaret (Ali, 2020). In contrast to these frameworks, AG–TS supports probabilistic forecasting and focuses on ease of use, allowing users to generate forecasts in a few lines of code.

5 Experiments

5.1 Setup

The goal of our experiments is to evaluate the point and probabilistic forecast accuracy of AG–TS. As baselines, we use various statistical and ML-based forecasting methods.

Baseline methods. AutoARIMA, AutoETS, and AutoTheta are established statistical forecasting models that automatically tune model parameters for each time series individually based on an information criterion (Hyndman et al., 2008).
This means that such models do not require a validation set and use in-sample statistics for model tuning. StatEnsemble is defined by taking the median of the predictions of the three statistical models. Such statistical ensembles, despite their simplicity, have been shown to achieve competitive results in forecasting competitions (Makridakis et al., 2018). We use the Python implementations of all these methods provided by the StatsForecast library (Garza et al., 2022). We additionally use Seasonal Naive as a sanity-check baseline that all other methods are compared against (Hyndman and Athanasopoulos, 2018).

For ML-based methods, we include two established deep learning forecasting models, DeepAR (Salinas et al., 2020) and the Temporal Fusion Transformer (TFT) (Lim et al., 2021). We use the PyTorch implementations of these models provided by GluonTS (Alexandrov et al., 2020). Finally, we include the AutoML forecasting framework AutoPyTorch–Forecasting (Deng et al., 2022) in our comparison. AutoPyTorch builds deep learning forecasting models by combining neural architecture search (e.g., by trying various encoder modules) and hyperparameter optimization (e.g., by tuning the learning rate). The search process is powered by a combination of Bayesian and multi-fidelity optimization. Similar to AutoGluon, the models are combined using ensemble selection (Caruana et al., 2004).

Datasets. In our evaluation we use 29 publicly available forecasting benchmark datasets provided via GluonTS. These include datasets from the Monash Forecasting Repository (Godahewa et al., 2021), such as the M1, M3 and M4 competition data (Makridakis and Hibon, 2000; Makridakis et al., 2018). We selected the datasets from the Monash Repository that contain more than a single time series and fewer than 15M total time steps.
Our selection of datasets covers various scenarios that can be encountered in practice—from small datasets (M1 and M3), to datasets with a few long time series (Electricity, Pedestrian Counts) and large collections of medium-sized time series (M4). A comprehensive list of dataset statistics is provided in Table 8 in the appendix.

Configuration. We train the TimeSeriesPredictor from AG–TS with best_quality presets, as these are designed to produce the most accurate forecasts, and set the time_limit to 4 hours. Note that the presets were fixed a priori and not optimized using the benchmark datasets. DeepAR and TFT are also trained for up to 4 hours with early stopping on validation loss and patience set to 200. For these models, the model checkpoint achieving the best validation loss is used to generate the test predictions. The time limit for AutoPyTorch is similarly set to 4 hours. We set no time limit for the remaining statistical models, as they do not support such functionality. In case the runtime of a single experiment exceeds 6 hours, the job is interrupted and the result is marked as a failure. More details about the configuration are available in Appendix A.3.

All models are trained using AWS m6i.4xlarge cloud instances (16 vCPU cores, 64 GB RAM). We use CPU instances to fairly evaluate the CPU-only baselines, though AG–TS additionally supports GPU training. Each run is repeated 5 times using different random seeds for non-deterministic models. We run all experiments using AutoMLBenchmark (Gijsbers et al., 2022). In the supplement, we provide full configuration details and the scripts for reproducing all experiments.

5.2 Forecasting Accuracy

We measure the accuracy of the point forecasts by reporting the mean absolute scaled error (MASE) of all forecasting methods on all benchmark datasets. AG–TS and AutoPyTorch are trained to optimize the MASE metric, while all other models are trained using their normal training procedure. We report the aggregate statistics in Table 3, and provide the full results for individual models and datasets in Table 9 in the appendix.

Table 3: Point forecast accuracy comparison of baseline methods with AutoGluon (based on the MASE metric) on 29 datasets. Listed are the number of datasets where each method produced: lower error than AutoGluon (Wins), higher error (Losses), error within 0.001 (Ties), an error during prediction (Failures), or the lowest error among all methods (Champion). Average rank and average error are computed using the datasets where no method failed. We rescale the errors for each dataset between [0,1] to ensure that averaging is meaningful. The final column reports the win rate versus the Seasonal Naive baseline. Individual results are given in Table 9.

Framework           Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (MASE)    -     -       -     0         19        2.08          0.073                   100.0%
StatEnsemble        6     20      0     3         3         3.12          0.238                   82.8%
AutoPyTorch (MASE)  4     25      0     0         2         4.12          0.257                   93.1%
AutoETS             4     25      0     0         1         4.64          0.374                   75.9%
AutoTheta           4     23      0     2         0         4.92          0.427                   72.4%
DeepAR              4     24      0     1         2         5.08          0.434                   93.1%
AutoARIMA           4     22      0     3         1         5.92          0.612                   79.3%
TFT                 2     27      0     0         1         6.12          0.635                   75.9%

We measure the accuracy of the probabilistic (quantile) forecasts by reporting the mean weighted quantile loss (wQL) averaged over 9 quantile levels q ∈ {0.1, 0.2, ..., 0.9}. AG–TS is configured to optimize the wQL metric.

Table 4: Probabilistic forecast accuracy comparison of each baseline method with AutoGluon (based on the wQL metric) on 29 datasets. The columns are defined as in Table 3. Results for individual models and datasets are given in Table 10.

Framework        Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (wQL)  -     -       -     0         19        1.80          0.086                   100.0%
StatEnsemble     3     23      0     3         0         3.36          0.330                   86.2%
DeepAR           5     23      0     1         1         4.08          0.455                   89.7%
TFT              5     24      0     0         5         4.24          0.487                   89.7%
AutoETS          3     26      0     0         2         4.40          0.489                   69.0%
AutoTheta        2     25      0     2         1         5.00          0.545                   69.0%
AutoARIMA        4     22      0     3         1         5.12          0.641                   82.8%
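The "average rescaled error" aggregation can be reproduced mechanically: errors are min-max rescaled to [0, 1] within each dataset before averaging across datasets, so that datasets with very different error scales contribute equally. A sketch with invented error values (not the actual benchmark numbers):

```python
import numpy as np

# Hypothetical errors: rows = datasets, columns = frameworks.
errors = np.array([
    [0.8, 1.0, 1.2],   # dataset 1, errors on one scale
    [2.0, 4.0, 2.5],   # dataset 2, errors on a very different scale
])

# Min-max rescale within each dataset before averaging across datasets.
lo = errors.min(axis=1, keepdims=True)
hi = errors.max(axis=1, keepdims=True)
rescaled = (errors - lo) / (hi - lo)
avg_rescaled = rescaled.mean(axis=0)   # one score per framework

# Average rank across datasets (1 = best on a dataset).
ranks = errors.argsort(axis=1).argsort(axis=1) + 1
avg_rank = ranks.mean(axis=0)
```

Without the rescaling step, dataset 2 would dominate the average purely because its errors are numerically larger.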
We exclude AutoPyTorch from this comparison, since this framework does not support probabilistic forecasting. We report the aggregate statistics in Table 4, and provide the full results for individual models and datasets in Table 10 in the appendix.

Some of the frameworks failed to generate forecasts on certain datasets. AutoARIMA, AutoTheta and StatEnsemble did not finish training on some datasets (Electricity–Hourly, KDD Cup 2018, and Pedestrian Counts) within 6 hours. This is caused by the poor scaling of these models to very long time series. The DeepAR model fails on one dataset (Web Traffic Weekly) due to numerical errors encountered during training.

Discussion. The results demonstrate that AG–TS outperforms all other frameworks, achieving the best average rank and rescaled error for both point and probabilistic forecasts, and even beating the best-in-hindsight competing method on 19 out of 29 datasets.

StatEnsemble places second after AG–TS. The statistical ensemble performs especially well on small datasets such as M1 and M3. This demonstrates that in the low-data regime, simple approaches, like ensembling by taking the median, may perform better than the learned ensemble selection strategy employed by both AutoML frameworks.

Figure 2: Total runtime of each framework across all datasets. AutoGluon always completes training and prediction under the time limit and achieves a mean runtime of 33 minutes. AutoPyTorch is always trained for the full 4-hour time limit. Statistical models train faster in most cases, but may take an extremely long time to train on datasets with long time series. The runtimes for individual models and datasets are provided in Table 11.

AutoPyTorch achieves similar performance to StatEnsemble in point forecasting across most performance indicators. Interestingly, AG–TS tends to outperform AutoPyTorch on larger datasets like M4. This means that AG–TS's strategy of training various light-weight models performs well in this setting under the limited time budget.
Also note that configuring AutoPyTorch requires more code and domain knowledge, compared to the 3 lines of code necessary to reproduce the above results with AG–TS.

The deep learning models DeepAR and TFT perform well in terms of probabilistic forecasting, but fall behind simple statistical approaches in point forecasts. This makes sense, since the objective functions optimized by these deep learning models are designed for probabilistic forecasting.

5.3 Runtime Comparison

High accuracy is not the only important property of an AutoML system—the ability to generate predictions in a reasonable amount of time is often necessary in practice. To evaluate the efficiency of AG–TS, we compare its runtime with the other frameworks. We visualize the runtime of each framework across all datasets in Figure 2. Note that here we compare the total runtime, defined as the sum of training and prediction times. This reflects the typical forecasting workflow in practice, where the forecast is generated once for each time series. Moreover, it is hard to distinguish between the training and prediction time for local models, where a new model is trained for each new time series.

AG–TS completes training and prediction under the 4-hour time limit for all 29 datasets, and achieves a mean runtime of 33 minutes. While statistical models are faster on average, they can be extremely slow to train on datasets consisting of long time series. For instance, the runtimes of AutoARIMA, AutoTheta and StatEnsemble exceed 6 hours for 3 datasets with long time series. The deep learning models DeepAR and TFT have a higher median runtime compared to the statistical models, but never reach the 4-hour time limit due to early stopping. Finally, AutoPyTorch always consumes the entire 4-hour time budget due to its design.

To summarize, AG–TS is able to produce accurate forecasts under mild time budgets.
While, on average, AG–TS takes more time than the individual models, it produces more accurate forecasts and avoids the extremely long runtimes sometimes exhibited by local models. The results also demonstrate that limited training time is better spent training and ensembling many diverse models (as done by AG–TS), rather than tuning the hyperparameters of a restricted set of models (as done by AutoPyTorch).

Table 5: Ablation study. We compare the point forecast accuracy of AutoGluon, where certain component models are removed, ensembling is disabled, or the time limit is reduced. All versions except AutoGluon-1h and AutoGluon-10m are trained for 4 hours. The columns are defined and the scores are computed as in Table 3.

Framework        Champion  Average rank  Average rescaled error
AutoGluon-1h     19        2.04          0.070
AutoGluon-4h     19        2.08          0.073
NoStatModels     16        2.12          0.094
NoTabularModels  15        2.12          0.085
NoDeepModels     15        2.28          0.124
AutoGluon-10m    14        2.50          0.099
NoEnsemble       7         3.52          0.177

5.4 Ablations

Finally, we perform ablations to understand the effect of different components on the final performance. We compare the point forecast accuracy of the TimeSeriesPredictor trained for 4 hours with the MASE evaluation metric (Section 5.2) against several variations with certain components disabled. First, we exclude some base models from the presets: statistical models (NoStatModels), deep learning models (NoDeepModels), and tabular models (NoTabularModels). We also consider reducing the time limit to 1 hour (AutoGluon-1h) or 10 minutes (AutoGluon-10m), as well as disabling the final ensembling step (NoEnsemble). In the latter case, AG–TS predicts using the model with the best validation score. The rest of the setup is identical to Section 5.2.

Table 5 shows the metrics for the different model variations, each compared to the baselines from Section 5.2. AutoGluon-4h and AutoGluon-1h produce nearly identical results.
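The MASE metric used for these comparisons (defined in Appendix A.1) can be illustrated with a short stdlib-Python sketch. This is an illustration of the formula for a single time series, not AutoGluon's implementation, and the function and variable names are our own:

```python
def mase(history, actual, forecast, seasonality=1):
    """Mean absolute scaled error for one series: the mean absolute
    forecast error divided by the in-sample mean absolute error of the
    seasonal-naive predictor on the historic data."""
    H = len(actual)
    forecast_error = sum(abs(y - f) for y, f in zip(actual, forecast)) / H
    T, s = len(history), seasonality
    scale = sum(abs(history[t + s] - history[t]) for t in range(T - s)) / (T - s)
    return forecast_error / scale

# a forecast off by one unit per step, scaled by a history whose seasonal
# differences are all exactly one unit, scores exactly 1.0
score = mase(history=[1, 2, 2, 3, 3, 4], actual=[4, 5], forecast=[3, 4], seasonality=2)
```

Measured this way, Table 5 puts the 4-hour and 1-hour AutoGluon variants essentially on par.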
This is not surprising, as the 4-hour version finishes training under 1 hour for most datasets (Figure 2). Interestingly, AutoGluon achieves strong results even with a 10-minute time limit, achieving the best average rank and outperforming the best-in-hindsight model on 14 out of 29 datasets.

Removing the ensembling step has the most detrimental effect on the overall accuracy. This highlights the importance of ensembling, confirming the findings of other works (Makridakis et al., 2018; Borchert et al., 2022). The ablations also show that all 3 classes of models used by AutoGluon are important for the overall performance, with deep learning models being the most critical component.

6 Future Work

Our experiments demonstrate the strong forecasting accuracy achieved by AG–TS. Despite these encouraging initial results, we aim to continue developing the library, adding new functionality to further boost the forecasting performance. This includes incorporating various ideas from the space of AutoML for forecasting (Meisenbacher et al., 2022), with a focus on the following directions.

Ensembling. Advanced ensembling strategies, such as stacking (Ting and Witten, 1997), lie at the core of modern high-performing AutoML systems (Erickson et al., 2020). How to best generalize these techniques to probabilistic forecasting is an active, but still open, research question (Gastinger et al., 2021; Wang et al., 2022).

Calibration. Many practical tasks require guarantees on the uncertainty estimates associated with the forecasts. Conformal prediction methods (Stankeviciute et al., 2021; Xu and Xie, 2021) provide one way to obtain such guarantees, and we plan to incorporate them into AG–TS in the future.

New problem types. AG–TS supports the most common types of forecasting tasks, such as probabilistic forecasting or handling covariates. However, there are several settings that are currently (as of v0.8) not supported.
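The ensembling step whose removal is ablated above follows the ensemble-selection idea of Caruana et al. (2004), which AutoGluon builds on (Erickson et al., 2020). A simplified, self-contained sketch of that idea, with hypothetical names and plain MAE as the validation score rather than the metrics used in the paper:

```python
def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def greedy_ensemble(model_preds, y_val, n_rounds=5):
    """Caruana-style forward selection: repeatedly add (with replacement)
    the model whose inclusion most improves the averaged validation
    forecast, then return selection frequencies as ensemble weights."""
    selected = []
    ens = [0.0] * len(y_val)
    for _ in range(n_rounds):
        best_name, best_score, best_ens = None, float("inf"), None
        for name, preds in model_preds.items():
            k = len(selected) + 1
            cand = [(e * len(selected) + p) / k for e, p in zip(ens, preds)]
            score = mae(y_val, cand)
            if score < best_score:
                best_name, best_score, best_ens = name, score, cand
        selected.append(best_name)
        ens = best_ens
    return {name: selected.count(name) / len(selected) for name in model_preds}

# toy validation forecasts from three hypothetical base models
weights = greedy_ensemble(
    {"naive": [10, 10, 10], "theta": [11, 13, 15], "deepar": [13, 13, 14]},
    y_val=[12, 13, 14],
)
```

As noted above, a few forecasting settings remain unsupported as of v0.8.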
These include so-called cold-start forecasting (where little historic data is available) and generating forecast explanations (Rojat et al., 2021). Another interesting potential application for AG–TS is assisting judgemental forecasting. In this context, AG–TS could serve as a "tool" queried by a large language model (LLM) (Schick et al., 2023) to generate qualitative forecasts. More generally, combinations of LLMs with AutoML frameworks are an exciting direction for future work (Tornede et al., 2023).

Scalability. In our experiments we consider datasets with up to ≈10^7 time steps across all time series. Modern applications, however, sometimes require operating on even larger scales. This would require improving the efficiency of existing models and developing new efficient AutoML techniques.

7 Conclusions

In this work, we introduced AutoGluon–TimeSeries, a powerful and user-friendly open-source AutoML library for probabilistic time series forecasting. By combining statistical models and deep learning forecasting approaches with ensembling techniques, AutoGluon–TimeSeries is able to achieve strong empirical results on a range of benchmark datasets. With the ability to generate accurate point and quantile forecasts with just 3 lines of Python code, this framework is poised to make time series forecasting more accessible and efficient for a wide range of users.

8 Broader Impact Statement

AutoGluon–TimeSeries enables users to generate accurate forecasts in a few lines of code. This democratizes machine learning, lowering the barrier to entry to forecasting for non-experts. At the same time, AutoGluon–TimeSeries can be used by experienced users to design highly accurate forecasting pipelines. More accurate forecasts can directly translate to real-world impact in various domains.
For example, forecasting renewable energy generation is a crucial component of smart grid management (Tripathy and Prusty, 2021); accurately predicting demand leads to more efficient inventory management and increased revenue (Makridakis et al., 2022).

The potential negative impacts of the proposed approach are similar to those of other forecasting models. One such danger arises when the limitations of forecasting methods are not taken into account in the context of decision making (e.g., when guiding policy decisions). As forecasting models only capture statistical dependencies, they may be misleading when trying to estimate the effects of actions or interventions.

9 Submission Checklist

1. For all authors...

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] All claims are supported by the experimental evaluation in Section 5.

(b) Did you describe the limitations of your work? [Yes] See Section 6.

(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 8.

(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] The paper conforms to the guidelines.

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [N/A] The paper contains no theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] The paper contains no theoretical results.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)?
[Yes] All of the above are included in the supplementary material.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] Results are provided in CSV format.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [No] We provide the raw data and describe the procedure in the paper, which should make reproducing the results and figures straightforward.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is properly documented and we made sure that it can be executed in a fresh environment.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] We use the standard evaluation protocol: for all datasets, the last prediction_length time steps of each time series are held out and used to evaluate the forecasts produced by each method. For hyperparameters, see Section A.3.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We carefully made sure that this is the case.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 5.4.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] All methods use an identical evaluation protocol.

(i) Did you compare performance over time? [Yes] We allocate the same runtime budget of 4 hours to all methods. An ablation study is performed where the time limit is reduced to 1 hour and 10 minutes for AutoGluon.

(j) Did you perform multiple runs of your experiments and report random seeds?
[Yes] For all non-deterministic methods, the experiments are repeated with five random seeds: 1, 2, 3, 4, 5.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Error metrics produced by all non-deterministic methods include the mean and the standard deviation (see Tables 9 and 10).

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [No] These are not available for probabilistic time series forecasting.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] The compute infrastructure is described in Section 5.1. The total runtime of all experiments equals approximately 6000 hours (≈ # models × # seeds × # of datasets).

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a nas approach; and also hyperparameters of your own method)? [Yes] We describe the hyperparameter settings in Appendix A.3, in addition to providing the code that can be used to reproduce the results.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] References for all used datasets and methods are provided in Section 5.1.

(b) Did you mention the license of the assets? [Yes] This paper does not introduce any new public assets. The AutoGluon library is released under the Apache 2.0 License.

(c) Did you include any new assets either in the supplemental material or as a url? [No] This paper does not introduce any new public assets.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The evaluation was performed using public benchmark datasets.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
[N/A] The evaluation was performed using public benchmark datasets.

5. If you used crowdsourcing or conducted research with human subjects...

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not use crowdsourcing or conduct research with human subjects.

References

Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D. C., Rangapuram, S., Salinas, D., Schulz, J., et al. (2020). GluonTS: Probabilistic and neural time series modeling in Python. The Journal of Machine Learning Research, 21(1):4629–4634.

Ali, M. (2020). PyCaret: An open source, low-code machine learning library in Python. https://www.pycaret.org.

Assimakopoulos, V. and Nikolopoulos, K. (2000). The Theta model: A decomposition approach to forecasting. International Journal of Forecasting, 16(4):521–530.

Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, Y., Maddix, D., Turkmen, C., Gasthaus, J., Bohlke-Schneider, M., Salinas, D., Stella, L., et al. (2022). Deep learning for time series forecasting: Tutorial and literature survey. ACM Computing Surveys, 55(6):1–36.

Borchert, O., Salinas, D., Flunkert, V., Januschowski, T., and Günnemann, S. (2022). Multi-objective model selection for time series forecasting. arXiv preprint arXiv:2202.08485.

Box, G. E., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M. (1970). Time series analysis: Forecasting and control. John Wiley & Sons.

Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from libraries of models.
In Proceedings of the Twenty-First International Conference on Machine Learning, page 18.

Catlin, C. (2022). AutoTS: Automated time series forecasting. https://github.com/winedarksea/AutoTS.

da Silva, F. R., Vieira, A. B., Bernardino, H. S., Alencar, V. A., Pessamilio, L. R., and Barbosa, H. J. C. (2022). Automated machine learning for time series prediction. In 2022 IEEE Congress on Evolutionary Computation (CEC), pages 1–7. IEEE.

Dahl, S. M. J. (2020). TSPO: An AutoML approach to time series forecasting. PhD thesis.

Deng, D., Karl, F., Hutter, F., Bischl, B., and Lindauer, M. (2022). Efficient automated deep learning for time series forecasting. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part III, pages 664–680. Springer.

Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and accurate AutoML for structured data. arXiv preprint arXiv:2003.06505.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning. Advances in Neural Information Processing Systems, 28.

Garza, F., Mergenthaler Canseco, M., Challu, C., and Olivares, K. G. (2022). StatsForecast: Lightning fast forecasting with statistical and econometric models. https://github.com/Nixtla/statsforecast (v1.15.0).

Gastinger, J., Nicolas, S., Stepić, D., Schmidt, M., and Schülke, A. (2021). A study on ensemble learning for time series forecasting and the need for meta-learning. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.

Gijsbers, P., Bueno, M. L., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML benchmark. arXiv preprint arXiv:2207.12560.

Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting.
Annual Review of Statistics and Its Application, 1:125–151.

Godahewa, R., Bergmeir, C., Webb, G. I., Hyndman, R. J., and Montero-Manso, P. (2021). Monash time series forecasting archive. In Neural Information Processing Systems Track on Datasets and Benchmarks.

Hong, T., Pinson, P., Wang, Y., Weron, R., Yang, D., and Zareipour, H. (2020). Energy forecasting: A review and outlook. IEEE Open Access Journal of Power and Energy, 7:376–388.

Hyndman, R., Koehler, A. B., Ord, J. K., and Snyder, R. D. (2008). Forecasting with exponential smoothing: The state space approach. Springer Science & Business Media.

Hyndman, R. J. and Athanasopoulos, G. (2018). Forecasting: Principles and practice. OTexts.

Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: The forecast package for R. Journal of Statistical Software, 27:1–22.

Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., and Callot, L. (2020). Criteria for classifying forecasting methods. International Journal of Forecasting, 36(1):167–177.

Januschowski, T., Wang, Y., Torkkola, K., Erkkilä, T., Hasson, H., and Gasthaus, J. (2022). Forecasting with trees. International Journal of Forecasting, 38(4):1473–1481.

Javeri, I. Y., Toutiaee, M., Arpinar, I. B., Miller, J. A., and Miller, T. W. (2021). Improving neural networks for time-series forecasting using data augmentation and AutoML. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 1–8. IEEE.

Joblib Development Team (2020). Joblib: Running Python functions as pipeline jobs. https://joblib.readthedocs.io/ (v1.2.0).

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30.

Kurian, J. J., Dix, M., Amihai, I., Ceusters, G., and Prabhune, A. (2021).
BOAT: A Bayesian optimization AutoML time-series framework for industrial applications. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 17–24. IEEE.

LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable automatic machine learning. In Proceedings of the AutoML Workshop at ICML, volume 2020.

Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748–1764.

Makridakis, S. and Hibon, M. (2000). The M3 competition: Results, conclusions and implications. International Journal of Forecasting, 16(4):451–476.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 competition: Results, findings, conclusion and way forward. International Journal of Forecasting, 34(4):802–808.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2022). The M5 competition: Background, organization, and implementation. International Journal of Forecasting, 38(4):1325–1336.

Meisenbacher, S., Turowski, M., Phipps, K., Rätz, M., Müller, D., Hagenmeyer, V., and Mikut, R. (2022). Review of automated time series forecasting pipelines. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(6):e1475.

Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. (2023). A time series is worth 64 words: Long-term forecasting with transformers. International Conference on Learning Representations.

Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C., and Vasilakis, C. (2021). Forecasting and planning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmental decisions. European Journal of Operational Research, 290(1):99–115.

Nixtla (2023). MLForecast: Scalable machine learning for time series forecasting. v0.7.2.

Olson, R. S. and Moore, J. H. (2016). TPOT: A tree-based pipeline optimization tool for automating machine learning.
In Workshop on Automatic Machine Learning, pages 66–74. PMLR.

Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y. (2020). N-BEATS: Neural basis expansion analysis for interpretable time series forecasting.

pandas development team (2020). pandas-dev/pandas: Pandas. https://doi.org/10.5281/zenodo.3509134 (v1.5.3).

Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics. Psychological Bulletin, 86(3):446.

Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., and Díaz-Rodríguez, N. (2021). Explainable artificial intelligence (XAI) on timeseries data: A survey. arXiv preprint arXiv:2104.00950.

Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. (2020). DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191.

Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., and Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.

Semenoglou, A.-A., Spiliotis, E., Makridakis, S., and Assimakopoulos, V. (2021). Investigating the accuracy of cross-learning time series forecasting methods. International Journal of Forecasting, 37(3):1072–1084.

Shah, S. Y., Patel, D., Vu, L., Dang, X.-H., Chen, B., Kirchner, P., Samulowitz, H., Wood, D., Bramble, G., Gifford, W. M., et al. (2021). AutoAI-TS: AutoAI for time series forecasting. In Proceedings of the 2021 International Conference on Management of Data, pages 2584–2596.

Shi, X., Mueller, J., Erickson, N., Li, M., and Smola, A. (2021). Multimodal AutoML on structured tables with text fields. In 8th ICML Workshop on Automated Machine Learning (AutoML).

Stankeviciute, K., M Alaa, A., and van der Schaar, M. (2021). Conformal time-series forecasting. Advances in Neural Information Processing Systems, 34:6216–6228.

Syntetos, A. A., Boylan, J. E., and Disney, S. M. (2009).
Forecasting for inventory planning: A 50-year review. Journal of the Operational Research Society, 60:S149–S160.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 847–855.

Ting, K. M. and Witten, I. H. (1997). Stacking bagged and dagged models.

Tornede, A., Deng, D., Eimer, T., Giovanelli, J., Mohan, A., Ruhkopf, T., Segel, S., Theodorakopoulos, D., Tornede, T., Wachsmuth, H., et al. (2023). AutoML in the age of large language models: Current challenges, future opportunities and risks. arXiv preprint arXiv:2306.08107.

Tripathy, D. S. and Prusty, B. R. (2021). Forecasting of renewable generation for applications in smart grid power systems. In Advances in Smart Grid Power System, pages 265–298. Elsevier.

Van Kuppevelt, D., Meijer, C., Huber, F., van der Ploeg, A., Georgievska, S., and van Hees, V. T. (2020). Mcfly: Automated deep learning on time series. SoftwareX, 12:100548.

Wang, X., Hyndman, R. J., Li, F., and Kang, Y. (2022). Forecast combinations: An over 50-year review. International Journal of Forecasting.

Wen, R., Torkkola, K., Narayanaswamy, B., and Madeka, D. (2017). A multi-horizon quantile recurrent forecaster. arXiv preprint arXiv:1711.11053.

Xu, C. and Xie, Y. (2021). Conformal prediction interval for dynamic time-series. In International Conference on Machine Learning, pages 11559–11569. PMLR.

Zimmer, L., Lindauer, M., and Hutter, F. (2021). Auto-PyTorch: Multi-fidelity metalearning for efficient and robust AutoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9):3079–3090.

A Supplementary Materials

A.1 Evaluation Metrics

MASE.
Mean absolute scaled error is the standard metric for evaluating the accuracy of point forecasts:

$$\mathrm{MASE} = \frac{1}{N}\sum_{i=1}^{N} \frac{\frac{1}{H}\sum_{h=1}^{H} \left| y_{i,T+h} - \hat{y}_{i,T+h} \right|}{\frac{1}{T-s}\sum_{t=1}^{T-s} \left| y_{i,t+s} - y_{i,t} \right|}$$

MASE is scale-invariant and does not suffer from the limitations of other metrics, such as being undefined when the target time series equals zero (Hyndman and Athanasopoulos, 2018). We compute the metric using the median (0.5 quantile) forecast produced by each model.

wQL. The weighted quantile loss for a single quantile level $q$ is defined as

$$\mathrm{wQL}[q] = \frac{2\sum_{i=1}^{N}\sum_{h=1}^{H} \left[ q \cdot \max\!\left(y_{i,T+h} - \hat{y}^{q}_{i,T+h},\, 0\right) + (1-q) \cdot \max\!\left(\hat{y}^{q}_{i,T+h} - y_{i,T+h},\, 0\right) \right]}{\sum_{i=1}^{N}\sum_{h=1}^{H} \left| y_{i,T+h} \right|}$$

In our experiments, we report the mean wQL averaged over 9 quantile levels $Q = \{0.1, 0.2, \ldots, 0.9\}$:

$$\mathrm{wQL} = \frac{1}{|Q|}\sum_{q \in Q} \mathrm{wQL}[q]$$

A.2 Reproducibility

We ran all experiments using AutoMLBenchmark (Gijsbers et al., 2022). We provide a fork of AMLB that includes all scripts necessary to reproduce the results from our paper in the following GitHub repository: https://github.com/shchur/automlbenchmark/tree/autogluon-timeseries-automl23/autogluon_timeseries_automl23

A.3 Model Configuration

We trained the baseline models DeepAR, TFT, AutoARIMA, AutoETS, and AutoTheta with the default hyperparameter configurations provided by the respective libraries. For DeepAR and TFT, the last prediction_length time steps of each time series were reserved as a validation set. Both models were trained for the full duration of 4 hours, saving the parameters and evaluating the validation loss at each epoch. The parameters achieving the lowest validation loss were then used for prediction. No HPO was performed for these two models, as AutoPyTorch already trains similar deep learning models with HPO.

For AutoPyTorch, we used the reference implementation by the authors.3 We set the target metric to "mean_MASE_forecasting", budget_type="epochs", min_budget=5, max_budget=50, and resampling_strategy=HoldoutValTypes.time_series_hold_out_validation.
We also set torch_num_threads to 16 (the number of vCPU cores).

In our experiments, we used AG–TS v0.8.2, the latest release at the time of publication. We used the "best_quality" presets and set eval_metric to either "MASE" or "mean_wQuantileLoss", depending on the experiment. All other parameters of the TimeSeriesPredictor were set to their default values. The "best_quality" presets include the following models: AutoETS, AutoARIMA, Theta (from StatsForecast), DeepAR, PatchTST, TFT (from GluonTS), DirectTabular, RecursiveTabular (wrappers around AutoGluon–Tabular and MLForecast), plus the baseline methods Naive and SeasonalNaive. The non-default hyperparameters of the individual models used by the best_quality presets are provided in Table 6.

3 https://github.com/dengdifan/Auto-PyTorch/blob/ecml22_apt_ts/examples/APT-TS/APT_task.py

The guiding principle for developing the presets for AG–TS can be summarized as "keep defaults whenever possible, except the cases where the defaults are clearly suboptimal". For example, we set allowmean=True for AutoARIMA to allow this model to handle time series with non-zero mean. For deep learning models, we increase the batch size from 32 to 64, since larger batch sizes typically lead to faster convergence for all deep learning models. The context_length is capped at a minimum value because the default setting context_length=prediction_length can result in models that ignore most of the history if prediction_length is very short. For PatchTST, we set the context_length to the value used in the respective publication (Nie et al., 2023).

The versions of the frameworks used in our experiments are listed in Table 7.

Table 6: Non-default hyperparameters that AutoGluon sets for the underlying models. The remaining parameters are all set to their defaults in the respective libraries.
Models not listed here (Naive, SeasonalNaive, AutoETS, DirectTabular, Theta) have all their hyperparameters set to the default values.

Model             Hyperparameter            Value
AutoARIMA         allowmean                 True
                  approximation             True
DeepAR            batch_size                64
                  context_length            max(10, 2 * prediction_length)
                  num_samples               250
PatchTST          batch_size                64
                  context_length            96
TFT               batch_size                64
                  context_length            max(64, 2 * prediction_length)
RecursiveTabular  tabular_hyperparameters   {"GBM", "NN_TORCH"}

Table 7: Versions of the frameworks used during evaluation.

Framework      Version
AutoGluon      0.8.2
AutoPyTorch    0.2.1
GluonTS        0.13.2
MLForecast     0.7.3
StatsForecast  1.5.0
Python         3.9
PyTorch        1.13.1+cpu

Table 8: Statistics of the benchmark datasets used in our experimental evaluation. Frequency is represented by pandas offset aliases. Seasonality depends on the frequency, and is used to configure statistical models and compute the MASE metric.

Dataset             # series  # time steps  Prediction length  Frequency  Seasonality
Car Parts           2,674     104,286       12                 M          12
CIF 2016            72        6,244         12                 M          12
COVID               266       48,412        30                 D          7
Electricity Hourly  321       8,428,176     48                 H          24
Electricity Weekly  321       47,508        8                  W          1
FRED-MD             107       76,612        12                 M          12
Hospital            767       55,224        12                 M          12
KDD Cup 2018        270       2,929,404     48                 H          24
M1 Monthly          617       44,892        18                 M          12
M1 Quarterly        203       8,320         8                  Q          4
M1 Yearly           181       3,429         6                  Y          1
M3 Monthly          1,428     141,858       18                 M          12
M3 Other            174       11,933        8                  Q          1
M3 Quarterly        756       30,956        8                  Q          4
M3 Yearly           645       14,449        6                  Y          1
M4 Daily            4,227     9,964,658     14                 D          7
M4 Hourly           414       353,500       48                 H          24
M4 Monthly          48,000    10,382,411    18                 M          12
M4 Quarterly        24,000    2,214,108     8                  Q          4
M4 Weekly           359       366,912       13                 W          1
M4 Yearly           22,974    707,265       6                  Y          1
NN5 Daily           111       81,585        56                 D          7
NN5 Weekly          111       11,655        8                  W          1
Pedestrian Counts   66        3,129,178     48                 H          24
Tourism Monthly     366       100,496       24                 M          12
Tourism Quarterly   427       39,128        8                  Q          4
Tourism Yearly      518       10,685        4                  Y          1
Vehicle Trips       262       45,253        7                  D          7
Web Traffic Weekly  145,063   15,376,678    8                  W          1

Table 9: Point forecast accuracy, as measured by MASE (lower is better).
For non-deterministic methods (DeepAR, TFT, AutoPyTorch, AutoGluon) we report the mean and standard deviation of the scores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generate a forecast in 6 hours. "N/A" denotes model failure.

Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluon
Car Parts 1.127 1.118 1.133 1.208 1.052 0.749 (0.001) 0.751 (0.002) 0.746 (0.0) 0.747 (0.0)
CIF 2016 1.289 1.069 0.898 1.006 0.945 1.278 (0.088) 1.372 (0.085) 1.023 (0.069) 1.073 (0.006)
COVID 8.977 6.029 5.907 7.719 5.884 7.166 (0.334) 5.192 (0.211) 4.911 (0.086) 5.805 (0.0)
Electricity Hourly 1.405 d.n.f. 1.465 d.n.f. d.n.f. 1.251 (0.006) 1.389 (0.025) 1.420 (0.123) 1.227 (0.003)
Electricity Weekly 3.037 3.009 3.076 3.113 3.077 2.447 (0.211) 2.861 (0.122) 2.322 (0.277) 1.892 (0.0)
FRED-MD 1.101 0.478 0.505 0.564 0.498 0.634 (0.038) 0.901 (0.086) 0.682 (0.058) 0.656 (0.0)
Hospital 0.921 0.820 0.766 0.764 0.753 0.771 (0.008) 0.814 (0.012) 0.770 (0.003) 0.741 (0.001)
KDD Cup 2018 0.975 d.n.f. 0.988 1.010 d.n.f.
0.841 (0.036) 0.844 (0.065) 0.764 (0.047) 0.709 (0.026)
M1 Monthly 1.314 1.152 1.083 1.092 1.045 1.117 (0.029) 1.534 (0.063) 1.278 (0.115) 1.235 (0.001)
M1 Quarterly 2.078 1.770 1.665 1.667 1.622 1.742 (0.028) 2.099 (0.108) 1.813 (0.056) 1.615 (0.0)
M1 Yearly 4.894 3.870 3.950 3.659 3.769 3.674 (0.161) 4.318 (0.122) 3.407 (0.078) 3.371 (0.007)
M3 Monthly 1.146 0.934 0.867 0.855 0.845 0.960 (0.017) 1.062 (0.04) 0.956 (0.083) 0.822 (0.0)
M3 Other 3.089 2.245 1.801 2.009 1.769 2.061 (0.182) 1.926 (0.028) 1.871 (0.024) 1.837 (0.004)
M3 Quarterly 1.425 1.419 1.121 1.119 1.096 1.198 (0.037) 1.176 (0.036) 1.180 (0.032) 1.057 (0.002)
M3 Yearly 3.172 3.159 2.695 2.608 2.627 2.694 (0.096) 2.818 (0.019) 2.691 (0.026) 2.520 (0.002)
M4 Daily 1.452 1.153 1.228 1.149 1.145 1.145 (0.026) 1.176 (0.018) 1.152 (0.009) 1.156 (0.0)
M4 Hourly 1.193 1.029 1.609 2.456 1.157 1.484 (0.151) 3.391 (0.442) 1.345 (0.404) 0.807 (0.001)
M4 Monthly 1.079 0.812 0.803 0.834 0.780 0.933 (0.01) 0.947 (0.005) 0.851 (0.025) 0.782 (0.0)
M4 Quarterly 1.602 1.276 1.167 1.183 1.148 1.367 (0.171) 1.277 (0.015) 1.176 (0.022) 1.139 (0.0)
M4 Weekly 2.777 2.355 2.548 2.608 2.375 2.418 (0.026) 2.625 (0.038) 2.369 (0.177) 2.035 (0.001)
M4 Yearly 3.966 3.720 3.077 3.085 3.032 3.858 (0.694) 3.220 (0.097) 3.093 (0.041) 3.019 (0.001)
NN5 Daily 1.011 0.935 0.870 0.878 0.859 0.812 (0.01) 0.789 (0.004) 0.807 (0.021) 0.761 (0.004)
NN5 Weekly 1.063 0.998 0.980 0.963 0.977 0.915 (0.085) 0.884 (0.012) 0.865 (0.025) 0.860 (0.0)
Pedestrian Counts 0.369 d.n.f. 0.553 d.n.f. d.n.f.
0.309 (0.005) 0.373 (0.01) 0.354 (0.024) 0.312 (0.009)
Tourism Monthly 1.631 1.585 1.529 1.666 1.469 1.461 (0.025) 1.719 (0.08) 1.495 (0.009) 1.442 (0.0)
Tourism Quarterly 1.699 1.655 1.578 1.648 1.539 1.599 (0.062) 1.830 (0.047) 1.647 (0.034) 1.537 (0.002)
Tourism Yearly 3.552 4.044 3.183 2.992 3.231 3.476 (0.165) 2.916 (0.197) 3.004 (0.053) 2.946 (0.007)
Vehicle Trips 1.302 1.427 1.301 1.284 1.203 1.162 (0.016) 1.227 (0.02) 1.162 (0.019) 1.113 (0.0)
Web Traffic Weekly 1.066 1.189 1.207 1.108 1.068 N/A 0.973 (0.022) 0.962 (0.01) 0.938 (0.0)

Table 10: Probabilistic forecast accuracy, as measured by wQL (lower is better). For non-deterministic methods (DeepAR, TFT, AutoGluon) we report the mean and standard deviation of the scores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generate a forecast in 6 hours. "N/A" denotes model failure.

Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoGluon
Car Parts 1.717 1.589 1.338 1.367 1.324 0.963 (0.009) 0.878 (0.004) 0.923 (0.0)
CIF 2016 0.031 0.017 0.039 0.027 0.028 0.114 (0.024) 0.010 (0.002) 0.019 (0.0)
COVID 0.140 0.030 0.046 0.094 0.046 0.072 (0.02) 0.031 (0.003) 0.030 (0.0)
Electricity Hourly 0.108 d.n.f. 0.100 d.n.f. d.n.f. 0.081 (0.002) 0.097 (0.001) 0.076 (0.0)
Electricity Weekly 0.141 0.138 0.144 0.146 0.141 0.123 (0.041) 0.118 (0.011) 0.088 (0.0)
FRED-MD 0.104 0.056 0.050 0.057 0.054 0.054 (0.021) 0.114 (0.011) 0.056 (0.0)
Hospital 0.062 0.058 0.053 0.055 0.053 0.053 (0.001) 0.054 (0.001) 0.051 (0.0)
KDD Cup 2018 0.489 d.n.f. 0.550 0.553 d.n.f.
0.363 (0.014) 0.488 (0.054) 0.323 (0.014)M1 Monthly 0.153 0.146 0.163 0.159 0.152 0.136 (0.008) 0.224 (0.016) 0.135 (0.0)M1 Quarterly 0.119 0.088 0.081 0.082 0.083 0.084 (0.003) 0.093 (0.006) 0.090 (0.0)M1 Yearly 0.184 0.160 0.139 0.137 0.142 0.142 (0.029) 0.127 (0.004) 0.134 (0.001)M3 Monthly 0.124 0.102 0.093 0.095 0.092 0.098 (0.001) 0.109 (0.003) 0.089 (0.0)M3 Other 0.047 0.035 0.032 0.035 0.031 0.036 (0.002) 0.033 (0.001) 0.031 (0.0)M3 Quarterly 0.083 0.079 0.069 0.070 0.068 0.073 (0.001) 0.071 (0.001) 0.065 (0.0)M3 Yearly 0.141 0.162 0.129 0.128 0.128 0.117 (0.002) 0.133 (0.001) 0.114 (0.0)M4 Daily 0.030 0.023 0.025 0.023 0.023 0.023 (0.0) 0.023 (0.0) 0.022 (0.0)M4 Hourly 0.039 0.036 0.070 0.041 0.037 0.065 (0.03) 0.038 (0.002) 0.030 (0.001)M4 Monthly 0.109 0.085 0.085 0.088 0.082 0.092 (0.003) 0.089 (0.001) 0.081 (0.0)M4 Quarterly 0.099 0.082 0.079 0.079 0.076 0.084 (0.005) 0.083 (0.001) 0.075 (0.0)M4 Weekly 0.073 0.050 0.052 0.053 0.050 0.046 (0.001) 0.049 (0.001) 0.041 (0.0)M4 Yearly 0.138 0.130 0.111 0.115 0.109 0.124 (0.006) 0.116 (0.004) 0.104 (0.0)NN5 Daily 0.292 0.169 0.162 0.188 0.164 0.148 (0.002) 0.145 (0.001) 0.140 (0.0)NN5 Weekly 0.142 0.090 0.088 0.090 0.089 0.084 (0.007) 0.085 (0.001) 0.078 (0.0)Pedestrian Counts 0.675 d.n.f. 0.764 d.n.f. d.n.f. 
0.230 (0.006) 0.261 (0.008) 0.238 (0.013)Tourism Monthly 0.088 0.095 0.101 0.091 0.085 0.086 (0.005) 0.103 (0.01) 0.083 (0.0)Tourism Quarterly 0.099 0.098 0.070 0.061 0.070 0.068 (0.002) 0.083 (0.005) 0.072 (0.0)Tourism Yearly 0.170 0.156 0.157 0.176 0.155 0.141 (0.016) 0.102 (0.006) 0.152 (0.0)Vehicle Trips 0.112 0.100 0.115 0.120 0.103 0.090 (0.002) 0.099 (0.005) 0.087 (0.0)Web Traffic Weekly 0.936 0.475 8·10130.503 0.474 N/A 0.223 (0.011) 0.225 (0.0)20Table 11: Average run time of each method (in minutes).Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluonCar Parts 0.1 2.4 0.6 0.7 3.3 6.9 9.2 240.3 17.4CIF 2016 0.1 0.4 0.5 0.6 1.3 4.1 6.2 240.2 16.7COVID 0.1 1.4 0.5 0.7 2.3 7.9 8.8 240.4 29.3Electricity Hourly 0.2 >360 21.6 >360 >360 10.4 19.5 240.4 61.2Electricity Weekly 0.2 0.3 0.4 0.5 1.0 3.1 6.6 240.2 14.9FRED-MD 0.1 2.4 0.7 0.6 3.4 6.8 5.5 240.2 16.8Hospital 0.1 0.9 0.7 0.7 2.1 4.6 7.6 240.2 17.4KDD Cup 2018 0.1 >360 16.3 22.8 >360 12.4 11.9 240.3 56.0M1 Monthly 0.1 1.5 0.8 0.7 2.7 5.5 6.2 240.2 21.6M1 Quarterly 0.1 0.3 0.5 0.7 1.3 5.9 5.4 240.2 15.6M1 Yearly 0.1 0.3 0.4 0.4 0.9 4.2 5.2 240.2 12.9M3 Monthly 0.1 4.0 1.0 0.8 5.8 5.1 5.9 240.3 24.2M3 Other 0.1 0.3 0.4 0.4 0.9 5.0 6.0 240.2 13.6M3 Quarterly 0.1 0.5 0.6 0.7 1.6 4.6 6.0 240.3 15.7M3 Yearly 0.1 0.4 0.5 0.4 1.0 5.9 5.4 240.2 12.7M4 Daily 0.2 28.5 33.0 25.3 82.3 6.8 8.4 240.3 68.7M4 Hourly 0.1 84.9 1.8 0.8 89.5 9.2 10.9 240.2 51.2M4 Monthly 0.3 296.0 37.6 7.7 340.3 4.9 7.9 242.0 112.1M4 Quarterly 0.2 15.7 6.2 1.6 23.2 4.7 7.6 240.9 62.3M4 Weekly 0.1 0.6 0.5 1.3 2.2 5.6 7.8 240.3 20.8M4 Yearly 0.2 4.3 0.8 0.7 5.6 4.2 6.1 240.8 35.6NN5 Daily 0.1 2.5 0.5 0.6 3.3 7.3 10.9 240.3 37.4NN5 Weekly 0.1 0.3 0.4 0.4 1.0 3.6 6.4 240.2 13.7Pedestrian Counts 0.1 >360 4.9 >360 >360 13.5 16.7 240.7 56.4Tourism Monthly 0.1 10.2 0.8 0.7 13.1 4.4 7.6 240.2 26.0Tourism Quarterly 0.1 0.9 0.6 0.7 1.8 3.6 6.3 240.2 14.6Tourism Yearly 0.1 0.3 0.4 0.4 1.0 3.5 5.8 240.3 12.4Vehicle 
Trips 0.1 1.1 0.6 0.7 2.2 5.1 7.3 240.2 16.0Web Traffic Weekly 0.2 42.3 3.7 6.2 52.8 N/A 8.3 260.5 106.021
B8ia9_6TOo
XHIY3cQ8Tew
automl.cc/AutoML/2023/ABCD_Track
2023
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting
["Oleksandr Shchur", "Ali Caner Turkmen", "Nick Erickson", "Huibin Shen", "Alexander Shirkov", "Tony Hu", "Bernie Wang"]
We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic time series forecasting. Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines conventional statistical models, machine-learning-based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.
["AutoML", "forecasting", "time series", "probabilistic forecasting"]
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting

Oleksandr Shchur (1), Caner Turkmen (1), Nick Erickson (1), Huibin Shen (2), Alexander Shirkov (1), Tony Hu (1), Yuyang Wang (2)
(1) Amazon Web Services  (2) AWS AI Labs

Abstract

We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic time series forecasting.[1] Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines conventional statistical models, machine-learning-based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.

1 Introduction

Time series (TS) forecasting is a fundamental statistical problem with applications in diverse domains such as inventory planning (Syntetos et al., 2009), smart grids (Hong et al., 2020), and epidemiology (Nikolopoulos et al., 2021). Decades of research have led to the development of various forecasting approaches, from simple statistical models (Hyndman and Athanasopoulos, 2018) to expressive deep-learning-based architectures (Benidis et al., 2022). Despite the availability of various forecasting approaches, practitioners often struggle with selecting the most appropriate method and adhering to best practices when implementing and evaluating forecasting pipelines.

AutoML aims to mitigate these challenges by providing tools that enable practitioners to develop accurate and efficient predictive models without extensive domain knowledge.
While traditional AutoML methods have focused primarily on classification and regression tasks for tabular data (Thornton et al., 2013; Feurer et al., 2015; Olson and Moore, 2016; Erickson et al., 2020; LeDell and Poirier, 2020; Zimmer et al., 2021), automated time series forecasting has received comparatively less attention, with only a few open-source AutoML forecasting frameworks having been proposed (Deng et al., 2022; Catlin, 2022). Furthermore, existing automated forecasting frameworks tend to generate point forecasts without considering uncertainty, which is a crucial factor in many practical applications (Gneiting and Katzfuss, 2014).

To close this gap, we introduce AutoGluon–TimeSeries (AG–TS), an open-source AutoML framework for probabilistic time series forecasting written in Python. AG–TS can generate both point and probabilistic forecasts for collections of univariate time series. Together with support for static and time-varying covariates, this makes AG–TS applicable to most real-world forecasting tasks. As part of the AutoGluon framework (Erickson et al., 2020; Shi et al., 2021), AG–TS adheres to the principles of ease of use and robustness, empowering users with limited expertise in the target domain to generate highly accurate predictions with minimal coding effort.

[1] https://github.com/autogluon/autogluon

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Figure 1: Point forecast (left) and quantile forecast (right) for a univariate time series.

The architecture is
This highlights the potential of AG–TS as a valuable tool forpractitioners and researchers seeking an automated and versatile solution for time series forecasting.2 Probabilistic Time Series ForecastingThe probabilistic time series forecasting problem can be formally stated as follows. The dataD={yi,1:Ti}Ni=1is a collection of Nunivariate time series, where yi,1:Ti=(yi,1,...,yi,T i),yi,tis thevalue of the i-th time series at time t, andTiis the length of the i-th time series.2For example,yi,tmay correspond to the number of units of product isold on day t. The goal of time seriesforecasting is to predict the future Hvalues for each time series in D. The parameter His knownasprediction length orforecast horizon .Each time series yi,1:Tmay additionally be associated with covariates Xi,1:T+H. These includeboth static covariates (e.g., location of the store, product ID) and time-varying covariates . Thetime-varying covariates may, in turn, be known in the future (e.g., day of the week, promotions) oronly known in the past (e.g., weather, sales of other products).In the most general form, the goal of probabilistic forecasting is to model the conditionaldistribution of the future time series values yi,T+1:T+Hgiven the past values yi,1:Tand the relatedcovariates Xi,1:T+Hp(yi,T+1:T+H|yi,1:T,Xi,1:T+H).In practice, we are rarely interested in the full predictive distribution and rather represent therange of possible outcomes with quantile forecasts ˆyqi,T+1:T+Hfor chosen quantile levels q∈(0,1).The quantile forecast implies that the future time series value yi,T+his predicted to exceed ˆyqi,T+hwith probability q(Wen et al., 2017; Lim et al., 2021).If the uncertainty is of no interest, we can instead report a point forecast of the future timeseries values. For example, we can summarize the prediction using the conditional meanˆyi,T+1:T+H=Ep[yi,T+1:T+H|yi,1:T,Xi,1:T+H].Figure 1 demonstrates the difference between a point forecast and a quantile forecast. 
Finally, note that here we consider the problem of forecasting multiple univariate time series, also known as panel data, which is different from multivariate forecasting (Benidis et al., 2022).

[2] To reduce clutter in notation, we assume that all time series have the same length T (even though AG–TS supports the case when time series have different lengths).

3 AutoGluon–TimeSeries

AutoGluon–TimeSeries enables users to generate probabilistic time series forecasts in a few lines of code, as shown by the following minimal example.

    from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

    train_data = TimeSeriesDataFrame.from_path("train.csv")
    predictor = TimeSeriesPredictor(prediction_length=30).fit(train_data)
    predictions = predictor.predict(train_data)  # forecast next 30 time steps

Loading the data. A TimeSeriesDataFrame object stores a collection of univariate time series and provides utilities such as loading data from disk and train-test splitting. Internally, time series data is represented as a pandas.DataFrame (pandas development team, 2020) in long format (Table 1), but loaders are also available for other formats. Besides the target time series that need to be forecast, TimeSeriesDataFrame can also store the static and time-varying covariates.

Table 1: Collection of univariate time series stored as a TimeSeriesDataFrame. Each row contains the unique ID of the time series, the timestamp, and the value of the target time series.

    item_id  timestamp   target
    T1       2020-03-02  23
    T1       2020-03-03  43
    ...      ...         ...
    T999     2020-08-29  15
    T999     2020-08-31  27

Defining the task. Users can specify the forecasting task by creating a TimeSeriesPredictor object. Task definition includes information such as the prediction length, the list of quantile levels to be predicted, and the evaluation metric. The evaluation metric should be chosen based on the downstream application.
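The long format of Table 1 is easy to emulate in plain Python: each record is an (item_id, timestamp, target) triple, and grouping by item_id recovers the individual series. This is a hypothetical stdlib sketch of the data layout only, not the actual TimeSeriesDataFrame (which, as noted above, is backed by a pandas.DataFrame):

```python
from collections import defaultdict

# Long-format records, one per (series, time step), as in Table 1.
rows = [
    ("T1", "2020-03-02", 23),
    ("T1", "2020-03-03", 43),
    ("T999", "2020-08-29", 15),
    ("T999", "2020-08-31", 27),
]

def to_series(rows):
    """Group long-format rows into {item_id: [(timestamp, target), ...]}."""
    series = defaultdict(list)
    for item_id, timestamp, target in rows:
        series[item_id].append((timestamp, target))
    return dict(series)

panel = to_series(rows)
assert list(panel) == ["T1", "T999"]
assert panel["T1"] == [("2020-03-02", 23), ("2020-03-03", 43)]
```

Keeping all series in one long table is what lets global models (Section 3.2) train on the whole panel at once.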
For example, mean weighted quantile loss (wQL) measures the accuracy of quantile forecasts, and mean absolute scaled error (MASE) reports the accuracy of the point forecast relative to a naive baseline. When creating the predictor, users can also specify which time-varying covariates are known in the future—the remainder will be treated as past-only covariates.

Fitting the predictor. Inside the fit() method, the predictor preprocesses the data, fits and evaluates various models using cross-validation, optionally performs hyperparameter optimization (HPO) on selected models, and trains an ensemble of the individual forecasting models. By default, AG–TS provides user-friendly presets users can choose from to manage the training time–accuracy tradeoff. Advanced users can also explicitly specify the models to use and their hyperparameters, or specify search spaces in which optimal hyperparameters will be searched.

Making predictions. After the predictor has been fit, the predict() method can be used to generate predictions on new data—including time series that haven't been seen during training. Like the input data, the predictions are stored in a long-format data frame, where the columns contain the mean (expected value) and quantile forecasts at the desired quantile levels (Table 2).

Documentation. We provide various additional resources on the official website auto.gluon.ai. These include installation instructions, tutorials, and a cheatsheet summarizing the main features.

3.1 Design Considerations

AG–TS was launched as a part of the AutoGluon suite (Erickson et al., 2020) in v0.5, building on the foundation of AutoGluon and borrowing some design elements from other forecasting libraries like GluonTS (Alexandrov et al., 2020). Since then, AG–TS has evolved into a full solution for time series forecasting. Below, we highlight some of AG–TS's key design principles.

Table 2: Mean and quantile forecasts generated by a TimeSeriesPredictor.
The forecasts include the next prediction_length time steps of each time series in the dataset.

    item_id  timestamp   mean  0.1  0.5  0.9
    T1       2020-09-01  17    10   16   23
    T1       2020-09-02  25    15   23   31
    ...      ...         ...   ...  ...  ...
    T999     2020-09-29  33    21   33   36
    T999     2020-09-30  30    24   28   34

Ensembles over HPO. AG–TS follows the AutoGluon philosophy, relying on ensembling techniques instead of HPO or neural architecture search. The library features a broad selection of models whose probabilistic forecasts are combined in an ensemble selection step (Caruana et al., 2004). AG–TS favors broadening the portfolio of forecasters over exploring the hyperparameter space of any particular model. While AG–TS does support HPO techniques, HPO is excluded from most preset configurations to reduce training time and minimize overfitting on the validation data.

Presets and default hyperparameters. In order to provide defaults that work well out of the box for users who are not familiar with forecasting, AG–TS includes various presets—high-level configuration options that allow users to trade off between fast training and higher accuracy. AG–TS follows the convention-over-configuration principle: all models feature default configurations of hyperparameters that are expected to work well given the selected preset. At the same time, advanced users have the option to manually configure individual models and use the TimeSeriesPredictor as a unified API for training, evaluating and combining various forecasting models (see documentation for details).

Model selection. Time series forecasting introduces unique challenges in model validation and selection. Importantly, as the main aim of the model is to generalize into the future, special care has to be taken to define validation sets that are held out across time. The AG–TS API is designed with this consideration. If the user does not explicitly specify a validation set, the library holds out the window with the last prediction_length time steps of each time series as a validation set.
Optionally, multiple windows can be used to perform so-called backtesting.

3.2 Forecasting Models

There are two families of approaches to forecasting in large panels of time series. The first approach is to fit local classical parametric statistical models to each individual time series. A second approach is built on expressive machine-learning-based approaches that are fit globally on all time series at once. AG–TS features both approaches, incorporating forecasting models from both families and combining them in an ensemble.

Local models. This category contains conventional methods that capture simple patterns like trend and seasonality. Examples include ARIMA (Box et al., 1970), Theta (Assimakopoulos and Nikolopoulos, 2000) and ETS (Hyndman et al., 2008), as well as simple baselines like Seasonal Naive (Hyndman and Athanasopoulos, 2018). AG–TS relies on implementations of these provided by StatsForecast (Garza et al., 2022).

The defining characteristic of local models is that a separate model is fit to each individual time series in the dataset (Januschowski et al., 2020). This means that local models need to be re-fit when making predictions for new time series not seen during training. To mitigate this limitation, AG–TS caches the model predictions and parallelizes their fitting across CPU cores using Joblib (Joblib Development Team, 2020).

Global models. Unlike local models, a single global model is fitted to the entire dataset and used to make predictions for all time series. Global models used by AG–TS can be subdivided into two categories: deep learning and tabular models. Deep-learning models such as DeepAR (Salinas et al., 2020), PatchTST (Nie et al., 2023), and Temporal Fusion Transformer (Lim et al., 2021) use neural networks to generate probabilistic forecasts for future data. AG–TS uses PyTorch-based deep learning models from GluonTS (Alexandrov et al., 2020).
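To illustrate the tabular category just mentioned: a plain regression model becomes a forecaster by training on lagged values and predicting recursively, one step at a time. This is a simplified stdlib sketch with hypothetical helper names, not AG–TS code (the library delegates this conversion to MLForecast):

```python
def make_lag_features(series, n_lags):
    """Convert a univariate series into a tabular regression dataset: each
    row holds the n_lags previous values (features) and the next value
    (the regression target)."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return X, y

def recursive_forecast(series, n_lags, horizon, predict):
    """One-step-ahead recursion: each prediction is appended to the
    history and fed back as a feature for the next step."""
    history = list(series)
    for _ in range(horizon):
        history.append(predict(history[-n_lags:]))
    return history[len(series):]

X, y = make_lag_features([1, 2, 3, 4, 5], n_lags=2)
assert X == [[1, 2], [2, 3], [3, 4]] and y == [3, 4, 5]

# A trivial "model" (mean of the lags) stands in for e.g. LightGBM:
mean_model = lambda lags: sum(lags) / len(lags)
assert recursive_forecast([1, 2, 3, 4], 2, 2, mean_model) == [3.5, 3.75]
```

The alternative "direct" strategy mentioned below trains one regression target per horizon step instead of feeding predictions back.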
Tabular models like LightGBM (Ke et al., 2017) operate by first converting the time series forecasting task into a tabular regression problem. This can be done either recursively—by predicting future time series values one at a time—or by directly forecasting all future values simultaneously (Januschowski et al., 2022). AG–TS relies on regression models provided by AutoGluon–Tabular and uses MLForecast (Nixtla, 2023) for converting them into tabular forecasters.

Global models typically provide faster inference compared to local models, since there is no need for re-training at prediction time. This, however, comes at the cost of longer training times, since more parameters need to be estimated. Global models also naturally handle various types of covariates and utilize information present across different time series, which is known as cross-learning (Semenoglou et al., 2021).

Ensembling. After AG–TS finishes sequentially fitting the individual models, they are combined using 100 steps of the forward selection algorithm (Caruana et al., 2004). The output of the ensemble is a convex combination of the model predictions:

    ŷ^{ensemble}_{i,T+1:T+H} = Σ_{m=1}^{M} w_m · ŷ^{(m)}_{i,T+1:T+H}   subject to   w_m ≥ 0,  Σ_{m=1}^{M} w_m = 1,

where ŷ^{(m)}_{i,T+1:T+H} are either point or quantile forecasts generated by each of the M trained models. Note that in the case of probabilistic forecasting, the ensemble computes a weighted average of the quantile forecasts of the individual models—a method known as Vincentization (Ratcliff, 1979).

The ensemble weights w_m are tuned to optimize the chosen evaluation metric (e.g., wQL, MASE) on the out-of-fold predictions generated using time series cross-validation (Hyndman and Athanasopoulos, 2018). The main advantages of the forward selection algorithm are its simplicity, compatibility with arbitrary evaluation metrics, and the sparsity of the final ensemble.

4 Related work

Time series forecasting is a challenging task, and the idea of automated forecasting has long intrigued statistics and ML researchers.
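The forward selection procedure described in the Ensembling paragraph above can be sketched in a few lines: models are greedily added to the ensemble (with replacement), and the selection counts normalize to convex weights w_m. This simplified stdlib sketch uses MAE as the validation metric and is illustrative only—AG–TS optimizes the user's chosen evaluation metric on out-of-fold predictions:

```python
def forward_selection(preds, y_true, n_steps=100):
    """Greedy ensemble selection (in the style of Caruana et al., 2004):
    at each step, add the model whose inclusion minimizes the ensemble's
    validation error; counts normalize to nonnegative weights summing to 1."""
    def mae(f):
        return sum(abs(a - b) for a, b in zip(f, y_true)) / len(y_true)

    counts = [0] * len(preds)
    ensemble = [0.0] * len(y_true)
    for step in range(1, n_steps + 1):
        best_m, best_err, best_cand = None, None, None
        for m, p in enumerate(preds):
            # Candidate ensemble = running average of the selected forecasts.
            cand = [(e * (step - 1) + v) / step for e, v in zip(ensemble, p)]
            err = mae(cand)
            if best_err is None or err < best_err:
                best_m, best_err, best_cand = m, err, cand
        counts[best_m] += 1
        ensemble = best_cand
    total = sum(counts)
    return [c / total for c in counts]

# Two models, one biased high and one biased low; selection mixes them.
y_val = [1.0, 2.0, 3.0]
preds = [[1.5, 2.5, 3.5], [0.5, 1.5, 2.5]]
weights = forward_selection(preds, y_val, n_steps=10)
assert abs(sum(weights) - 1.0) < 1e-9 and min(weights) >= 0.0
```

Because many models may never be selected, the resulting weight vector is typically sparse, which is one of the advantages noted above.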
An early influential work on automated forecasting was the R package forecast (Hyndman and Khandakar, 2008), which introduced the AutoETS and AutoARIMA models. These models automatically tune their parameters (e.g., trend, seasonality) for each individual time series using an in-sample information criterion.

The following decade saw a growing focus on deep learning models for time series (Benidis et al., 2022; Wen et al., 2017; Salinas et al., 2020; Lim et al., 2021; Oreshkin et al., 2020). Several works have explored how such neural-network-based models can be combined with AutoML techniques to generate automated forecasting solutions (Van Kuppevelt et al., 2020; Shah et al., 2021; Javeri et al., 2021). Another line of research focused on optimizing the entire forecasting pipeline—including data preprocessing and feature engineering—not just hyperparameter tuning for individual models (Dahl, 2020; Kurian et al., 2021; da Silva et al., 2022). A recent survey by Meisenbacher et al. (2022) provides an overview of such automated pipelines.

Even though AutoML for forecasting is becoming an active research topic, few of the recent developments have found their way from academic papers to software packages. Available open-source AutoML forecasting libraries include AutoPyTorch–Forecasting (Deng et al., 2022), AutoTS (Catlin, 2022) and PyCaret (Ali, 2020). In contrast to these frameworks, AG–TS supports probabilistic forecasting and focuses on ease of use, allowing users to generate forecasts in a few lines of code.

5 Experiments

5.1 Setup

The goal of our experiments is to evaluate the point and probabilistic forecast accuracy of AG–TS. As baselines, we use various statistical and ML-based forecasting methods.

Baseline methods. AutoARIMA, AutoETS, and AutoTheta are established statistical forecasting models that automatically tune model parameters for each time series individually based on an information criterion (Hyndman et al., 2008).
This means such models do not require a validation set and use in-sample statistics for model tuning. StatEnsemble is defined by taking the median of the predictions of the three statistical models. Such statistical ensembles, despite their simplicity, have been shown to achieve competitive results in forecasting competitions (Makridakis et al., 2018). We use Python implementations of all these methods provided by the StatsForecast library (Garza et al., 2022). We additionally use Seasonal Naive as a sanity-check baseline that all other methods are compared against (Hyndman and Athanasopoulos, 2018).

For ML-based methods, we include two established deep learning forecasting models, DeepAR (Salinas et al., 2020) and Temporal Fusion Transformer (TFT) (Lim et al., 2021). We use the PyTorch implementations of these models provided by GluonTS (Alexandrov et al., 2020). Finally, we add the AutoML forecasting framework AutoPyTorch–Forecasting (Deng et al., 2022) to our comparison. AutoPyTorch builds deep learning forecasting models by combining neural architecture search (e.g., by trying various encoder modules) and hyperparameter optimization (e.g., by tuning the learning rate). The search process is powered by a combination of Bayesian and multi-fidelity optimization. Similar to AutoGluon, the models are combined using ensemble selection (Caruana et al., 2004).

Datasets. In our evaluation we use 29 publicly available forecasting benchmark datasets provided via GluonTS. These include datasets from the Monash Forecasting Repository (Godahewa et al., 2021), such as the M1, M3 and M4 competition data (Makridakis and Hibon, 2000; Makridakis et al., 2018). We selected the datasets from the Monash Repository that contain more than a single time series and fewer than 15M total time steps.
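The StatEnsemble baseline described above is simple enough to sketch directly: an elementwise median of the point forecasts of the three statistical models (a stdlib illustration, not the benchmark code):

```python
from statistics import median

def stat_ensemble(forecasts):
    """Combine point forecasts by taking the elementwise median across models."""
    return [median(step_values) for step_values in zip(*forecasts)]

# Hypothetical 2-step forecasts from the three statistical baselines:
arima = [10.0, 11.0]
ets = [12.0, 12.0]
theta = [11.0, 14.0]
assert stat_ensemble([arima, ets, theta]) == [11.0, 12.0]
```

The median makes the combination robust to a single model producing a wildly off forecast, which helps explain its competitiveness despite its simplicity.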
Our selection of datasets covers various scenarios that can be encountered in practice—from small datasets (M1 and M3) to datasets with a few long time series (Electricity, Pedestrian Counts) and large collections of medium-sized time series (M4). Comprehensive dataset statistics are provided in Table 8 in the appendix.

Configuration. We train the TimeSeriesPredictor from AG–TS with best_quality presets, as these are designed to produce the most accurate forecasts, and set the time_limit to 4 hours. Note that the presets were fixed a priori and not optimized using the benchmark datasets. DeepAR and TFT are also trained for up to 4 hours with early stopping on validation loss with patience set to 200. For these models, the model checkpoint achieving the best validation loss is used to generate the test predictions. The time limit for AutoPyTorch is similarly set to 4 hours. We set no time limit for the remaining statistical models, as they do not support such functionality. In case the runtime of a single experiment exceeds 6 hours, the job is interrupted and the result is marked as a failure. More details about the configuration are available in Appendix A.3.

All models are trained using AWS m6i.4xlarge cloud instances (16 vCPU cores, 64 GB RAM). We use CPU instances to fairly evaluate the CPU-only baselines, though AG–TS additionally supports GPU training. Each run is repeated 5 times using different random seeds for non-deterministic models. We run all experiments using AutoMLBenchmark (Gijsbers et al., 2022). In the supplement, we provide full configuration details and the scripts for reproducing all experiments.

5.2 Forecasting Accuracy

We measure the accuracy of the point forecasts by reporting the mean absolute scaled error (MASE) of all forecasting methods on all benchmark datasets. AG–TS and AutoPyTorch are trained to optimize the MASE metric, while all other models are trained using their normal training procedure. We report the aggregate statistics in Table 3, and provide the full results for individual models and datasets in Table 9 in the appendix.

We measure the accuracy of the probabilistic (quantile) forecasts by reporting the mean weighted quantile loss (wQL) averaged over 9 quantile levels q ∈ {0.1, 0.2, ..., 0.9}. AG–TS is configured to optimize the wQL metric.

Table 3: Point forecast accuracy comparison of baseline methods with AutoGluon (based on the MASE metric) on 29 datasets. Listed are the number of datasets where each method produced: lower error than AutoGluon (Wins), higher error (Losses), error within 0.001 (Ties), an error during prediction (Failures), or the lowest error among all methods (Champion). Average rank and average error are computed using the datasets where no method failed. We rescale the errors for each dataset between [0, 1] to ensure that averaging is meaningful. The final column reports the win rate versus the Seasonal Naive baseline. Individual results are given in Table 9.

Framework  Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (MASE)  -  -  -  0  19  2.08  0.073  100.0%
StatEnsemble  6  20  0  3  3  3.12  0.238  82.8%
AutoPyTorch (MASE)  4  25  0  0  2  4.12  0.257  93.1%
AutoETS  4  25  0  0  1  4.64  0.374  75.9%
AutoTheta  4  23  0  2  0  4.92  0.427  72.4%
DeepAR  4  24  0  1  2  5.08  0.434  93.1%
AutoARIMA  4  22  0  3  1  5.92  0.612  79.3%
TFT  2  27  0  0  1  6.12  0.635  75.9%

Table 4: Probabilistic forecast accuracy comparison of each baseline method with AutoGluon (based on the wQL metric) on 29 datasets. The columns are defined as in Table 3. Results for individual models and datasets are given in Table 10.

Framework  Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (wQL)  -  -  -  0  19  1.80  0.086  100.0%
StatEnsemble  3  23  0  3  0  3.36  0.330  86.2%
DeepAR  5  23  0  1  1  4.08  0.455  89.7%
TFT  5  24  0  0  5  4.24  0.487  89.7%
AutoETS  3  26  0  0  2  4.40  0.489  69.0%
AutoTheta  2  25  0  2  1  5.00  0.545  69.0%
AutoARIMA  4  22  0  3  1  5.12  0.641  82.8%
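For reference, the MASE metric used for point-forecast evaluation scales the forecast MAE by the in-sample MAE of a (seasonal) naive forecast, so that a score below 1 beats the naive baseline on the training history. A stdlib sketch, illustrative rather than the benchmark implementation:

```python
def mase(y_train, y_test, y_pred, m=1):
    """Mean absolute scaled error: test-set MAE of the forecast divided by
    the in-sample MAE of the seasonal-naive forecast with period m."""
    scale = sum(abs(y_train[t] - y_train[t - m]) for t in range(m, len(y_train)))
    scale /= (len(y_train) - m)
    err = sum(abs(a - b) for a, b in zip(y_test, y_pred)) / len(y_test)
    return err / scale

y_train = [1.0, 3.0, 2.0, 5.0]   # naive (m=1) in-sample MAE = (2 + 1 + 3) / 3 = 2
assert mase(y_train, [6.0, 7.0], [5.0, 9.0], m=1) == 0.75  # forecast MAE 1.5 / 2
```

The scaling makes scores comparable across series with very different magnitudes, which is why averaging MASE over datasets is meaningful.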
We exclude AutoPyTorch from this comparison since this framework does not support probabilistic forecasting. We report the aggregate statistics in Table 4, and provide the full results for individual models and datasets in Table 10 in the appendix.

Some of the frameworks failed to generate forecasts on certain datasets. AutoARIMA, AutoTheta and StatEnsemble did not finish training on some datasets (Electricity Hourly, KDD Cup 2018, and Pedestrian Counts) within 6 hours. This is caused by the poor scaling of these models to very long time series. The DeepAR model fails on one dataset (Web Traffic Weekly) due to numerical errors encountered during training.

Discussion. The results demonstrate that AG–TS outperforms all other frameworks, achieving the best average rank and rescaled error for both point and probabilistic forecasts, and even beating the best-in-hindsight competing method on 19 out of 29 datasets.

StatEnsemble places second after AG–TS. The statistical ensemble performs especially well on small datasets such as M1 and M3. This demonstrates that in the low-data regime simple approaches, like ensembling by taking the median, may perform better than the learned ensemble selection strategy employed by both AutoML frameworks.

AutoPyTorch achieves similar performance to StatEnsemble in point forecasting across most performance indicators. Interestingly, AG–TS tends to outperform AutoPyTorch on larger datasets like M4. This means that AG–TS's strategy of training various light-weight models performs well in this setting under the limited time budget. Also note that configuring AutoPyTorch requires more code and domain knowledge, compared to the 3 lines of code necessary to reproduce the above results with AG–TS.

The deep learning models DeepAR and TFT perform well in terms of probabilistic forecasting, but fall behind simple statistical approaches in point forecasts. This makes sense, since the objective functions optimized by these deep learning models are designed for probabilistic forecasting.

5.3 Runtime Comparison

High accuracy is not the only important property of an AutoML system—the ability to generate predictions in a reasonable amount of time is often necessary in practice. To evaluate the efficiency of AG–TS, we compare its runtime with other frameworks. We visualize the runtime of each framework across all datasets in Figure 2. Note that here we compare the total runtime, defined as the sum of training and prediction times. This reflects the typical forecasting workflow in practice, where the forecast is generated once for each time series. Moreover, it's hard to distinguish between the training and prediction time for local models, where a new model is trained for each new time series.

Figure 2: Total runtime of each framework across all datasets. AutoGluon always completes training and prediction under the time limit and achieves a mean runtime of 33 minutes. AutoPyTorch is always trained for the full 4 hour time limit. Statistical models train faster in most cases, but may take an extremely long time to train on datasets with long time series. The runtimes for individual models and datasets are provided in Table 11.

AG–TS completes training and prediction under the 4-hour time limit for all 29 datasets, and achieves a mean runtime of 33 minutes. While statistical models are faster on average, they can be extremely slow to train on datasets consisting of long time series. For instance, the runtimes of AutoARIMA, AutoTheta and StatEnsemble exceed 6 hours for 3 datasets with long time series. The deep learning models DeepAR and TFT have a higher median runtime compared to the statistical models, but never reach the 4 hour time limit due to early stopping. Finally, AutoPyTorch always consumes the entire 4 hour time budget due to its design.

To summarize, AG–TS is able to produce accurate forecasts under mild time budgets. While, on
While, onaverage, AG–TS takes more time than the individual models, it produces more accurate forecastsand avoids the extremely long runtimes sometimes exhibited by local models. The results alsodemonstrate that limited training time is better spent training and ensembling many diverse models(as done by AG–TS), rather than hyperparameter tuning a restricted set of models (as done byAutoPyTorch).8Table 5: Ablation study. We compare the point forecast accuracy of AutoGluon, where certain compo-nent models are removed, ensembling is disabled, or the time limit is reduced. All versionsexcept AutoGluon-1h and AutoGluon-10m are trained for 4 hours. The columns are definedand the scores are computed as in Table 3.Framework Champion Average rank Average rescaled errorAutoGluon-1h 19 2.04 0.070AutoGluon-4h 19 2.08 0.073NoStatModels 16 2.12 0.094NoTabularModels 15 2.12 0.085NoDeepModels 15 2.28 0.124AutoGluon-10m 14 2.50 0.099NoEnsemble 7 3.52 0.1775.4 AblationsFinally, we perform ablations to understand the effect of different components on the final perfor-mance. We compare the point forecast accuracy of the TimeSeriesPredictor trained for 4 hourswith MASE evalauation metric (Section 5.2) against several variations with certain disabled com-ponents. First, we exclude some base models from the presets: statistical models ( NoStatModels ),deep learning models ( NoDeepModels ), and tabular models ( NoTabularModels ). We also considerreducing the time limit to 1 hour ( AutoGluon-1h ) or 10 minutes ( AutoGluon-10m ), as well disablingthe final ensembling step ( NoEnsemble ). In the latter case, AG–TS predicts using the model withthe best validation score. The rest of the setup is identical to Section 5.2.Table 5 shows the metrics for the different model variations, each compared to the baselinesfrom Section 5.2. AutoGluon-4h and AutoGluon-1h produce nearly identical results. 
This is not surprising, as the 4-hour version finishes training under 1 hour for most datasets (Figure 2). Interestingly, AutoGluon achieves strong results even with a 10-minute time limit, achieving the best average rank and outperforming the best-in-hindsight model on 14 out of 29 datasets.

Removing the ensembling step has the most detrimental effect on the overall accuracy. This highlights the importance of ensembling, confirming the findings of other works (Makridakis et al., 2018; Borchert et al., 2022). The ablations also show that all 3 classes of models used by AutoGluon are important for the overall performance, with deep learning models being the most critical component.

6 Future Work

Our experiments demonstrate the strong forecasting accuracy achieved by AG–TS. Despite these encouraging initial results, we aim to continue developing the library, adding new functionality and further boosting the forecasting performance. This includes incorporating the various ideas in the space of AutoML for forecasting (Meisenbacher et al., 2022), with a focus on the following directions.

Ensembling. Advanced ensembling strategies, such as stacking (Ting and Witten, 1997), lie at the core of modern high-performing AutoML systems (Erickson et al., 2020). How to best generalize these techniques to probabilistic forecasting is an active, but still open research question (Gastinger et al., 2021; Wang et al., 2022).

Calibration. Many practical tasks require guarantees on the uncertainty estimates associated with the forecasts. Conformal prediction methods (Stankeviciute et al., 2021; Xu and Xie, 2021) provide one way to obtain such guarantees, and we plan to incorporate them into AG–TS in the future.

New problem types. AG–TS supports the most common types of forecasting tasks, such as probabilistic forecasting or handling covariates. However, there are several settings that are currently (as of v0.8) not supported.
These include so-called cold-start forecasting (where little historic data is available) and generating forecast explanations (Rojat et al., 2021). Another interesting potential application for AG–TS is assisting judgemental forecasting. In this context, AG–TS could serve as a "tool" queried by a large language model (LLM) (Schick et al., 2023) to generate qualitative forecasts. More generally, combinations of LLMs with AutoML frameworks are an exciting direction for future work (Tornede et al., 2023).

Scalability. In our experiments we consider datasets with up to ≈10^7 time steps across all time series. Modern applications, however, sometimes require operating on even larger scales. This would require improving the efficiency of existing models and developing new efficient AutoML techniques.

7 Conclusions

In this work, we introduced AutoGluon–TimeSeries, a powerful and user-friendly open-source AutoML library for probabilistic time series forecasting. By combining statistical models and deep learning forecasting approaches with ensembling techniques, AutoGluon–TimeSeries is able to achieve strong empirical results on a range of benchmark datasets. With the ability to generate accurate point and quantile forecasts with just 3 lines of Python code, this framework is poised to make time series forecasting more accessible and efficient for a wide range of users.

8 Broader Impact Statement

AutoGluon–TimeSeries enables users to generate accurate forecasts in a few lines of code. This democratizes machine learning, lowering the barrier to entry to forecasting for non-experts. At the same time, AutoGluon–TimeSeries can be used by experienced users to design highly accurate forecasting pipelines. More accurate forecasts can directly translate to real-world impact in various domains.
For example, forecasting renewable energy generation is a crucial component of smart grid management (Tripathy and Prusty, 2021); accurately predicting demand leads to more efficient inventory management and increased revenue (Makridakis et al., 2022).

The potential negative impacts of the proposed approach are similar to those of other forecasting models. One such danger arises when the limitations of forecasting methods are not taken into account in the context of decision making (e.g., when guiding policy decisions). As forecasting models only capture statistical dependencies, they may be misleading when trying to estimate effects of actions or interventions.

9 Submission Checklist

1. For all authors. . .

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] All claims are supported by the experimental evaluation in Section 5.

(b) Did you describe the limitations of your work? [Yes] See Section 6.

(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 8.

(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] The paper conforms to the guidelines.

2. If you are including theoretical results. . .

(a) Did you state the full set of assumptions of all theoretical results? [N/A] The paper contains no theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] The paper contains no theoretical results.

3. If you ran experiments. . .

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)?
[Yes] All of the above are included in the supplementary material.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] Results are provided in CSV format.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [No] We provide the raw data and describe the procedure in the paper, which should make reproducing the results and figures straightforward.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is properly documented and we made sure that it can be executed in a fresh environment.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] We use the standard evaluation protocol: for all datasets, the last prediction_length time steps of each time series are held out and used to evaluate the forecasts produced by each method. For hyperparameters, see Section A.3.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We carefully made sure that this is the case.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 5.4.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] All methods use an identical evaluation protocol.

(i) Did you compare performance over time? [Yes] We allocate the same runtime budget of 4 hours to all methods. An ablation study is performed where the time limit is reduced to 1 hour and 10 minutes for AutoGluon.

(j) Did you perform multiple runs of your experiments and report random seeds?
[Yes] For all non-deterministic methods, the experiments are repeated with five random seeds: 1, 2, 3, 4, 5.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Error metrics produced by all non-deterministic methods include the mean and the standard deviation (see Tables 9 and 10).

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [No] These are not available for probabilistic time series forecasting.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] The compute infrastructure is described in Section 5.1. The total runtime of all experiments equals approximately 6000 hours (≈ # models × # seeds × # of datasets).

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [Yes] We describe the hyperparameter settings in Appendix A.3, in addition to providing the code that can be used to reproduce the results.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .

(a) If your work uses existing assets, did you cite the creators? [Yes] References for all used datasets and methods are provided in Section 5.1.

(b) Did you mention the license of the assets? [Yes] This paper does not introduce any new public assets. The AutoGluon library is released under the Apache 2.0 License.

(c) Did you include any new assets either in the supplemental material or as a url? [No] This paper does not introduce any new public assets.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The evaluation was performed using public benchmark datasets.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
[N/A] The evaluation was performed using public benchmark datasets.

5. If you used crowdsourcing or conducted research with human subjects. . .

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not use crowdsourcing or conduct research with human subjects.

References

Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D. C., Rangapuram, S., Salinas, D., Schulz, J., et al. (2020). GluonTS: Probabilistic and neural time series modeling in Python. The Journal of Machine Learning Research, 21(1):4629–4634.

Ali, M. (2020). PyCaret: An open source, low-code machine learning library in Python. https://www.pycaret.org.

Assimakopoulos, V. and Nikolopoulos, K. (2000). The Theta model: A decomposition approach to forecasting. International Journal of Forecasting, 16(4):521–530.

Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, Y., Maddix, D., Turkmen, C., Gasthaus, J., Bohlke-Schneider, M., Salinas, D., Stella, L., et al. (2022). Deep learning for time series forecasting: Tutorial and literature survey. ACM Computing Surveys, 55(6):1–36.

Borchert, O., Salinas, D., Flunkert, V., Januschowski, T., and Günnemann, S. (2022). Multi-objective model selection for time series forecasting. arXiv preprint arXiv:2202.08485.

Box, G. E., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M. (1970). Time series analysis: forecasting and control. John Wiley & Sons.

Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from libraries of models.
In Proceedings of the Twenty-First International Conference on Machine Learning, page 18.

Catlin, C. (2022). AutoTS: Automated time series forecasting. https://github.com/winedarksea/AutoTS.

da Silva, F. R., Vieira, A. B., Bernardino, H. S., Alencar, V. A., Pessamilio, L. R., and Barbosa, H. J. C. (2022). Automated machine learning for time series prediction. In 2022 IEEE Congress on Evolutionary Computation (CEC), pages 1–7. IEEE.

Dahl, S. M. J. (2020). TSPO: an autoML approach to time series forecasting. PhD thesis.

Deng, D., Karl, F., Hutter, F., Bischl, B., and Lindauer, M. (2022). Efficient automated deep learning for time series forecasting. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part III, pages 664–680. Springer.

Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and accurate AutoML for structured data. arXiv preprint arXiv:2003.06505.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning. Advances in Neural Information Processing Systems, 28.

Garza, F., Mergenthaler Canseco, M., Challu, C., and Olivares, K. G. (2022). StatsForecast: Lightning fast forecasting with statistical and econometric models. https://github.com/Nixtla/statsforecast (v1.15.0).

Gastinger, J., Nicolas, S., Stepić, D., Schmidt, M., and Schülke, A. (2021). A study on ensemble learning for time series forecasting and the need for meta-learning. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.

Gijsbers, P., Bueno, M. L., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML benchmark. arXiv preprint arXiv:2207.12560.

Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting.
Annual Review of Statistics and Its Application, 1:125–151.

Godahewa, R., Bergmeir, C., Webb, G. I., Hyndman, R. J., and Montero-Manso, P. (2021). Monash time series forecasting archive. In Neural Information Processing Systems Track on Datasets and Benchmarks.

Hong, T., Pinson, P., Wang, Y., Weron, R., Yang, D., and Zareipour, H. (2020). Energy forecasting: A review and outlook. IEEE Open Access Journal of Power and Energy, 7:376–388.

Hyndman, R., Koehler, A. B., Ord, J. K., and Snyder, R. D. (2008). Forecasting with exponential smoothing: the state space approach. Springer Science & Business Media.

Hyndman, R. J. and Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts.

Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: the forecast package for R. Journal of Statistical Software, 27:1–22.

Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., and Callot, L. (2020). Criteria for classifying forecasting methods. International Journal of Forecasting, 36(1):167–177.

Januschowski, T., Wang, Y., Torkkola, K., Erkkilä, T., Hasson, H., and Gasthaus, J. (2022). Forecasting with trees. International Journal of Forecasting, 38(4):1473–1481.

Javeri, I. Y., Toutiaee, M., Arpinar, I. B., Miller, J. A., and Miller, T. W. (2021). Improving neural networks for time-series forecasting using data augmentation and AutoML. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 1–8. IEEE.

Joblib Development Team (2020). Joblib: Running Python functions as pipeline jobs. https://joblib.readthedocs.io/ (v1.2.0).

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30.

Kurian, J. J., Dix, M., Amihai, I., Ceusters, G., and Prabhune, A. (2021).
BOAT: A Bayesian optimization autoML time-series framework for industrial applications. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 17–24. IEEE.

LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable automatic machine learning. In Proceedings of the AutoML Workshop at ICML, volume 2020.

Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748–1764.

Makridakis, S. and Hibon, M. (2000). The M3 competition: Results, conclusions and implications. International Journal of Forecasting, 16(4):451–476.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 competition: Results, findings, conclusion and way forward. International Journal of Forecasting, 34(4):802–808.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2022). The M5 competition: Background, organization, and implementation. International Journal of Forecasting, 38(4):1325–1336.

Meisenbacher, S., Turowski, M., Phipps, K., Rätz, M., Müller, D., Hagenmeyer, V., and Mikut, R. (2022). Review of automated time series forecasting pipelines. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(6):e1475.

Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. (2023). A time series is worth 64 words: Long-term forecasting with transformers. International Conference on Learning Representations.

Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C., and Vasilakis, C. (2021). Forecasting and planning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmental decisions. European Journal of Operational Research, 290(1):99–115.

Nixtla (2023). MLForecast: Scalable machine learning for time series forecasting. v0.7.2.

Olson, R. S. and Moore, J. H. (2016). TPOT: A tree-based pipeline optimization tool for automating machine learning.
In Workshop on Automatic Machine Learning, pages 66–74. PMLR.

Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y. (2020). N-BEATS: Neural basis expansion analysis for interpretable time series forecasting.

pandas development team (2020). pandas-dev/pandas: Pandas. https://doi.org/10.5281/zenodo.3509134 (v1.5.3).

Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics. Psychological Bulletin, 86(3):446.

Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., and Díaz-Rodríguez, N. (2021). Explainable artificial intelligence (XAI) on timeseries data: A survey. arXiv preprint arXiv:2104.00950.

Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. (2020). DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191.

Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., and Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.

Semenoglou, A.-A., Spiliotis, E., Makridakis, S., and Assimakopoulos, V. (2021). Investigating the accuracy of cross-learning time series forecasting methods. International Journal of Forecasting, 37(3):1072–1084.

Shah, S. Y., Patel, D., Vu, L., Dang, X.-H., Chen, B., Kirchner, P., Samulowitz, H., Wood, D., Bramble, G., Gifford, W. M., et al. (2021). AutoAI-TS: AutoAI for time series forecasting. In Proceedings of the 2021 International Conference on Management of Data, pages 2584–2596.

Shi, X., Mueller, J., Erickson, N., Li, M., and Smola, A. (2021). Multimodal AutoML on structured tables with text fields. In 8th ICML Workshop on Automated Machine Learning (AutoML).

Stankeviciute, K., M Alaa, A., and van der Schaar, M. (2021). Conformal time-series forecasting. Advances in Neural Information Processing Systems, 34:6216–6228.

Syntetos, A. A., Boylan, J. E., and Disney, S. M. (2009).
Forecasting for inventory planning: a 50-year review. Journal of the Operational Research Society, 60:S149–S160.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 847–855.

Ting, K. M. and Witten, I. H. (1997). Stacking bagged and dagged models.

Tornede, A., Deng, D., Eimer, T., Giovanelli, J., Mohan, A., Ruhkopf, T., Segel, S., Theodorakopoulos, D., Tornede, T., Wachsmuth, H., et al. (2023). AutoML in the age of large language models: Current challenges, future opportunities and risks. arXiv preprint arXiv:2306.08107.

Tripathy, D. S. and Prusty, B. R. (2021). Forecasting of renewable generation for applications in smart grid power systems. In Advances in Smart Grid Power System, pages 265–298. Elsevier.

Van Kuppevelt, D., Meijer, C., Huber, F., van der Ploeg, A., Georgievska, S., and van Hees, V. T. (2020). Mcfly: Automated deep learning on time series. SoftwareX, 12:100548.

Wang, X., Hyndman, R. J., Li, F., and Kang, Y. (2022). Forecast combinations: an over 50-year review. International Journal of Forecasting.

Wen, R., Torkkola, K., Narayanaswamy, B., and Madeka, D. (2017). A multi-horizon quantile recurrent forecaster. arXiv preprint arXiv:1711.11053.

Xu, C. and Xie, Y. (2021). Conformal prediction interval for dynamic time-series. In International Conference on Machine Learning, pages 11559–11569. PMLR.

Zimmer, L., Lindauer, M., and Hutter, F. (2021). Auto-PyTorch: Multi-fidelity metalearning for efficient and robust AutoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9):3079–3090.

A Supplementary Materials

A.1 Evaluation Metrics

MASE.
Mean absolute scaled error is the standard metric for evaluating the accuracy of point forecasts.

\[ \mathrm{MASE} = \frac{1}{N} \sum_{i=1}^{N} \frac{\frac{1}{H} \sum_{h=1}^{H} |y_{i,T+h} - \hat{y}_{i,T+h}|}{\frac{1}{T-s} \sum_{t=1}^{T-s} |y_{i,t+s} - y_{i,t}|} \]

MASE is scale-invariant and does not suffer from the limitations of other metrics, such as being undefined when the target time series equals zero (Hyndman and Athanasopoulos, 2018). We compute the metric using the median (0.5 quantile) forecast produced by each model.

wQL. Weighted quantile loss for a single quantile level q is defined as

\[ \mathrm{wQL}[q] = 2 \, \frac{\sum_{i=1}^{N} \sum_{h=1}^{H} \left[ q \cdot \max(y_{i,T+h} - \hat{y}^{q}_{i,T+h}, 0) + (1-q) \cdot \max(\hat{y}^{q}_{i,T+h} - y_{i,T+h}, 0) \right]}{\sum_{i=1}^{N} \sum_{h=1}^{H} |y_{i,T+h}|} \]

In our experiments, we report the mean wQL averaged over 9 quantile levels Q = {0.1, 0.2, ..., 0.9}.

\[ \mathrm{wQL} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{wQL}[q] \]

A.2 Reproducibility

We ran all experiments using AutoMLBenchmark (Gijsbers et al., 2022). We provide a fork of AMLB that includes all scripts necessary to reproduce the results from our paper in the following GitHub repository: https://github.com/shchur/automlbenchmark/tree/autogluon-timeseries-automl23/autogluon_timeseries_automl23.

A.3 Model Configuration

We trained the baseline models DeepAR, TFT, AutoARIMA, AutoETS, AutoTheta with the default hyperparameter configurations provided by the respective libraries. For DeepAR and TFT, the last prediction_length time steps of each time series were reserved as a validation set. Both models were trained for the full duration of 4 hours, saving the parameters and evaluating the validation loss at each epoch. The parameters achieving the lowest validation loss were then used for prediction. No HPO was performed for these two models, as AutoPyTorch already trains similar deep learning models with HPO.

For AutoPyTorch, we used the reference implementation by the authors.3 We set the target metric to "mean_MASE_forecasting", budget_type="epochs", min_budget=5, max_budget=50, and resampling_strategy=HoldoutValTypes.time_series_hold_out_validation.
We also set torch_num_threads to 16 (the number of vCPU cores).

In our experiments, we used AG–TS v0.8.2, the latest release at the time of publication. We used the "best_quality" presets and set eval_metric to either "MASE" or "mean_wQuantileLoss", depending on the experiment. All other parameters of the TimeSeriesPredictor were set to their default values. The "best_quality" presets include the following models: AutoETS, AutoARIMA, Theta (from StatsForecast), DeepAR, PatchTST, TFT (from GluonTS), DirectTabular, RecursiveTabular (wrappers around AutoGluon–Tabular and MLForecast), plus the baseline methods Naive and SeasonalNaive. The non-default hyperparameters of the individual models used by the best_quality presets are provided in Table 6.

3 https://github.com/dengdifan/Auto-PyTorch/blob/ecml22_apt_ts/examples/APT-TS/APT_task.py

The guiding principle for developing the presets for AG–TS can be summarized as "keep defaults whenever possible, except the cases where the defaults are clearly suboptimal". For example, we set allowmean=True for AutoARIMA to allow this model to handle time series with non-zero mean. For deep learning models, we increase the batch size from 32 to 64, since larger batch sizes typically lead to faster convergence for all deep learning models. The context_length is capped at a minimum value because the default setting context_length=prediction_length can result in models that ignore most of the history if prediction_length is very short. For PatchTST, we set the context_length to the value used in the respective publication (Nie et al., 2023).

The versions of frameworks used in our experiments are listed in Table 7.

Table 6: Non-default hyperparameters that AutoGluon sets for the underlying models. The remaining parameters are all set to their defaults in the respective libraries. Models not listed here
Models not listed here(Naive, SeasonalNaive, AutoETS, DirectTabular, Theta) have all their hyperparameters set tothe default values.Model Hyperparameter ValueAutoARIMA allowmean Trueapproximation TrueDeepAR batch_size 64context_length max(10, 2 * prediction_length)num_samples 250PatchTST batch_size 64context_length 96TFT batch_size 64context_length max(64, 2 * prediction_length)RecursiveTabular tabular_hyperparameters {"GBM", "NN_TORCH"}Table 7: Versions of the frameworks used during evaluation.Framework VersionAutoGluon 0.8.2AutoPyTorch 0.2.1GluonTS 0.13.2MLForecast 0.7.3StatsForecast 1.5.0Python 3.9PyTorch 1.13.1+cpu17Table 8: Statistics of the benchmark datasets used in our experimental evaluation. Frequency isrepresented by pandas offset aliases. Seasonality depends on the frequency, and is used toconfigure statistical models and compute the MASE metric.Dataset # series # time steps Prediction length Frequency SeasonalityCar Parts 2,674 104,286 12 M 12CIF 2016 72 6,244 12 M 12COVID 266 48,412 30 D 7Electricity Hourly 321 8,428,176 48 H 24Electricity Weekly 321 47,508 8 W 1FRED-MD 107 76,612 12 M 12Hospital 767 55,224 12 M 12KDD Cup 2018 270 2,929,404 48 H 24M1 Monthly 617 44,892 18 M 12M1 Quarterly 203 8,320 8 Q 4M1 Yearly 181 3,429 6 Y 1M3 Monthly 1,428 141,858 18 M 12M3 Other 174 11,933 8 Q 1M3 Quarterly 756 30,956 8 Q 4M3 Yearly 645 14,449 6 Y 1M4 Daily 4,227 9,964,658 14 D 7M4 Hourly 414 353,500 48 H 24M4 Monthly 48,000 10,382,411 18 M 12M4 Quarterly 24,000 2,214,108 8 Q 4M4 Weekly 359 366,912 13 W 1M4 Yearly 22,974 707,265 6 Y 1NN5 Daily 111 81,585 56 D 7NN5 Weekly 111 11,655 8 W 1Pedestrian Counts 66 3,129,178 48 H 24Tourism Monthly 366 100,496 24 M 12Tourism Quarterly 427 39,128 8 Q 4Tourism Yearly 518 10,685 4 Y 1Vehicle Trips 262 45,253 7 D 7Web Traffic Weekly 145,063 15,376,678 8 W 118Table 9: Point forecast accuracy, as measured by MASE (lower is better). 
For non-deterministic methods(DeepAR, TFT, AutoPyTorch, AutoGluon) we report the mean and standard deviation of thescores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generatea forecast in 6 hours. "N/A" denotes model failure.SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluonCar Parts 1.127 1.118 1.133 1.208 1.052 0.749 (0.001) 0.751 (0.002) 0.746 (0.0) 0.747 (0.0)CIF 2016 1.289 1.069 0.898 1.006 0.945 1.278 (0.088) 1.372 (0.085) 1.023 (0.069) 1.073 (0.006)COVID 8.977 6.029 5.907 7.719 5.884 7.166 (0.334) 5.192 (0.211) 4.911 (0.086) 5.805 (0.0)Electricity Hourly 1.405 d.n.f. 1.465 d.n.f. d.n.f. 1.251 (0.006) 1.389 (0.025) 1.420 (0.123) 1.227 (0.003)Electricity Weekly 3.037 3.009 3.076 3.113 3.077 2.447 (0.211) 2.861 (0.122) 2.322 (0.277) 1.892 (0.0)FRED-MD 1.101 0.478 0.505 0.564 0.498 0.634 (0.038) 0.901 (0.086) 0.682 (0.058) 0.656 (0.0)Hospital 0.921 0.820 0.766 0.764 0.753 0.771 (0.008) 0.814 (0.012) 0.770 (0.003) 0.741 (0.001)KDD Cup 2018 0.975 d.n.f. 0.988 1.010 d.n.f. 
0.841 (0.036) 0.844 (0.065) 0.764 (0.047) 0.709 (0.026)M1 Monthly 1.314 1.152 1.083 1.092 1.045 1.117 (0.029) 1.534 (0.063) 1.278 (0.115) 1.235 (0.001)M1 Quarterly 2.078 1.770 1.665 1.667 1.622 1.742 (0.028) 2.099 (0.108) 1.813 (0.056) 1.615 (0.0)M1 Yearly 4.894 3.870 3.950 3.659 3.769 3.674 (0.161) 4.318 (0.122) 3.407 (0.078) 3.371 (0.007)M3 Monthly 1.146 0.934 0.867 0.855 0.845 0.960 (0.017) 1.062 (0.04) 0.956 (0.083) 0.822 (0.0)M3 Other 3.089 2.245 1.801 2.009 1.769 2.061 (0.182) 1.926 (0.028) 1.871 (0.024) 1.837 (0.004)M3 Quarterly 1.425 1.419 1.121 1.119 1.096 1.198 (0.037) 1.176 (0.036) 1.180 (0.032) 1.057 (0.002)M3 Yearly 3.172 3.159 2.695 2.608 2.627 2.694 (0.096) 2.818 (0.019) 2.691 (0.026) 2.520 (0.002)M4 Daily 1.452 1.153 1.228 1.149 1.145 1.145 (0.026) 1.176 (0.018) 1.152 (0.009) 1.156 (0.0)M4 Hourly 1.193 1.029 1.609 2.456 1.157 1.484 (0.151) 3.391 (0.442) 1.345 (0.404) 0.807 (0.001)M4 Monthly 1.079 0.812 0.803 0.834 0.780 0.933 (0.01) 0.947 (0.005) 0.851 (0.025) 0.782 (0.0)M4 Quarterly 1.602 1.276 1.167 1.183 1.148 1.367 (0.171) 1.277 (0.015) 1.176 (0.022) 1.139 (0.0)M4 Weekly 2.777 2.355 2.548 2.608 2.375 2.418 (0.026) 2.625 (0.038) 2.369 (0.177) 2.035 (0.001)M4 Yearly 3.966 3.720 3.077 3.085 3.032 3.858 (0.694) 3.220 (0.097) 3.093 (0.041) 3.019 (0.001)NN5 Daily 1.011 0.935 0.870 0.878 0.859 0.812 (0.01) 0.789 (0.004) 0.807 (0.021) 0.761 (0.004)NN5 Weekly 1.063 0.998 0.980 0.963 0.977 0.915 (0.085) 0.884 (0.012) 0.865 (0.025) 0.860 (0.0)Pedestrian Counts 0.369 d.n.f. 0.553 d.n.f. d.n.f. 
(Table 9, continued): Point forecast accuracy per dataset, as measured by MASE. The first row is the tail of a row whose leading cells fall outside this excerpt.

| Dataset | SeasonalNaive | AutoARIMA | AutoETS | AutoTheta | StatEnsemble | DeepAR | TFT | AutoPyTorch | AutoGluon |
|---|---|---|---|---|---|---|---|---|---|
| … | … | … | … | … | … | 0.309 (0.005) | 0.373 (0.01) | 0.354 (0.024) | 0.312 (0.009) |
| Tourism Monthly | 1.631 | 1.585 | 1.529 | 1.666 | 1.469 | 1.461 (0.025) | 1.719 (0.08) | 1.495 (0.009) | 1.442 (0.0) |
| Tourism Quarterly | 1.699 | 1.655 | 1.578 | 1.648 | 1.539 | 1.599 (0.062) | 1.830 (0.047) | 1.647 (0.034) | 1.537 (0.002) |
| Tourism Yearly | 3.552 | 4.044 | 3.183 | 2.992 | 3.231 | 3.476 (0.165) | 2.916 (0.197) | 3.004 (0.053) | 2.946 (0.007) |
| Vehicle Trips | 1.302 | 1.427 | 1.301 | 1.284 | 1.203 | 1.162 (0.016) | 1.227 (0.02) | 1.162 (0.019) | 1.113 (0.0) |
| Web Traffic Weekly | 1.066 | 1.189 | 1.207 | 1.108 | 1.068 | N/A | 0.973 (0.022) | 0.962 (0.01) | 0.938 (0.0) |

Table 10: Probabilistic forecast accuracy, as measured by wQL (lower is better). For non-deterministic methods (DeepAR, TFT, AutoGluon) we report the mean and standard deviation of the scores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generate a forecast in 6 hours. "N/A" denotes model failure.

| Dataset | SeasonalNaive | AutoARIMA | AutoETS | AutoTheta | StatEnsemble | DeepAR | TFT | AutoGluon |
|---|---|---|---|---|---|---|---|---|
| Car Parts | 1.717 | 1.589 | 1.338 | 1.367 | 1.324 | 0.963 (0.009) | 0.878 (0.004) | 0.923 (0.0) |
| CIF 2016 | 0.031 | 0.017 | 0.039 | 0.027 | 0.028 | 0.114 (0.024) | 0.010 (0.002) | 0.019 (0.0) |
| COVID | 0.140 | 0.030 | 0.046 | 0.094 | 0.046 | 0.072 (0.02) | 0.031 (0.003) | 0.030 (0.0) |
| Electricity Hourly | 0.108 | d.n.f. | 0.100 | d.n.f. | d.n.f. | 0.081 (0.002) | 0.097 (0.001) | 0.076 (0.0) |
| Electricity Weekly | 0.141 | 0.138 | 0.144 | 0.146 | 0.141 | 0.123 (0.041) | 0.118 (0.011) | 0.088 (0.0) |
| FRED-MD | 0.104 | 0.056 | 0.050 | 0.057 | 0.054 | 0.054 (0.021) | 0.114 (0.011) | 0.056 (0.0) |
| Hospital | 0.062 | 0.058 | 0.053 | 0.055 | 0.053 | 0.053 (0.001) | 0.054 (0.001) | 0.051 (0.0) |
| KDD Cup 2018 | 0.489 | d.n.f. | 0.550 | 0.553 | d.n.f. | 0.363 (0.014) | 0.488 (0.054) | 0.323 (0.014) |
| M1 Monthly | 0.153 | 0.146 | 0.163 | 0.159 | 0.152 | 0.136 (0.008) | 0.224 (0.016) | 0.135 (0.0) |
| M1 Quarterly | 0.119 | 0.088 | 0.081 | 0.082 | 0.083 | 0.084 (0.003) | 0.093 (0.006) | 0.090 (0.0) |
| M1 Yearly | 0.184 | 0.160 | 0.139 | 0.137 | 0.142 | 0.142 (0.029) | 0.127 (0.004) | 0.134 (0.001) |
| M3 Monthly | 0.124 | 0.102 | 0.093 | 0.095 | 0.092 | 0.098 (0.001) | 0.109 (0.003) | 0.089 (0.0) |
| M3 Other | 0.047 | 0.035 | 0.032 | 0.035 | 0.031 | 0.036 (0.002) | 0.033 (0.001) | 0.031 (0.0) |
| M3 Quarterly | 0.083 | 0.079 | 0.069 | 0.070 | 0.068 | 0.073 (0.001) | 0.071 (0.001) | 0.065 (0.0) |
| M3 Yearly | 0.141 | 0.162 | 0.129 | 0.128 | 0.128 | 0.117 (0.002) | 0.133 (0.001) | 0.114 (0.0) |
| M4 Daily | 0.030 | 0.023 | 0.025 | 0.023 | 0.023 | 0.023 (0.0) | 0.023 (0.0) | 0.022 (0.0) |
| M4 Hourly | 0.039 | 0.036 | 0.070 | 0.041 | 0.037 | 0.065 (0.03) | 0.038 (0.002) | 0.030 (0.001) |
| M4 Monthly | 0.109 | 0.085 | 0.085 | 0.088 | 0.082 | 0.092 (0.003) | 0.089 (0.001) | 0.081 (0.0) |
| M4 Quarterly | 0.099 | 0.082 | 0.079 | 0.079 | 0.076 | 0.084 (0.005) | 0.083 (0.001) | 0.075 (0.0) |
| M4 Weekly | 0.073 | 0.050 | 0.052 | 0.053 | 0.050 | 0.046 (0.001) | 0.049 (0.001) | 0.041 (0.0) |
| M4 Yearly | 0.138 | 0.130 | 0.111 | 0.115 | 0.109 | 0.124 (0.006) | 0.116 (0.004) | 0.104 (0.0) |
| NN5 Daily | 0.292 | 0.169 | 0.162 | 0.188 | 0.164 | 0.148 (0.002) | 0.145 (0.001) | 0.140 (0.0) |
| NN5 Weekly | 0.142 | 0.090 | 0.088 | 0.090 | 0.089 | 0.084 (0.007) | 0.085 (0.001) | 0.078 (0.0) |
| Pedestrian Counts | 0.675 | d.n.f. | 0.764 | d.n.f. | d.n.f. | 0.230 (0.006) | 0.261 (0.008) | 0.238 (0.013) |
| Tourism Monthly | 0.088 | 0.095 | 0.101 | 0.091 | 0.085 | 0.086 (0.005) | 0.103 (0.01) | 0.083 (0.0) |
| Tourism Quarterly | 0.099 | 0.098 | 0.070 | 0.061 | 0.070 | 0.068 (0.002) | 0.083 (0.005) | 0.072 (0.0) |
| Tourism Yearly | 0.170 | 0.156 | 0.157 | 0.176 | 0.155 | 0.141 (0.016) | 0.102 (0.006) | 0.152 (0.0) |
| Vehicle Trips | 0.112 | 0.100 | 0.115 | 0.120 | 0.103 | 0.090 (0.002) | 0.099 (0.005) | 0.087 (0.0) |
| Web Traffic Weekly | 0.936 | 0.475 | 8·10^13 | 0.503 | 0.474 | N/A | 0.223 (0.011) | 0.225 (0.0) |

Table 11: Average run time of each method (in minutes).

| Dataset | SeasonalNaive | AutoARIMA | AutoETS | AutoTheta | StatEnsemble | DeepAR | TFT | AutoPyTorch | AutoGluon |
|---|---|---|---|---|---|---|---|---|---|
| Car Parts | 0.1 | 2.4 | 0.6 | 0.7 | 3.3 | 6.9 | 9.2 | 240.3 | 17.4 |
| CIF 2016 | 0.1 | 0.4 | 0.5 | 0.6 | 1.3 | 4.1 | 6.2 | 240.2 | 16.7 |
| COVID | 0.1 | 1.4 | 0.5 | 0.7 | 2.3 | 7.9 | 8.8 | 240.4 | 29.3 |
| Electricity Hourly | 0.2 | >360 | 21.6 | >360 | >360 | 10.4 | 19.5 | 240.4 | 61.2 |
| Electricity Weekly | 0.2 | 0.3 | 0.4 | 0.5 | 1.0 | 3.1 | 6.6 | 240.2 | 14.9 |
| FRED-MD | 0.1 | 2.4 | 0.7 | 0.6 | 3.4 | 6.8 | 5.5 | 240.2 | 16.8 |
| Hospital | 0.1 | 0.9 | 0.7 | 0.7 | 2.1 | 4.6 | 7.6 | 240.2 | 17.4 |
| KDD Cup 2018 | 0.1 | >360 | 16.3 | 22.8 | >360 | 12.4 | 11.9 | 240.3 | 56.0 |
| M1 Monthly | 0.1 | 1.5 | 0.8 | 0.7 | 2.7 | 5.5 | 6.2 | 240.2 | 21.6 |
| M1 Quarterly | 0.1 | 0.3 | 0.5 | 0.7 | 1.3 | 5.9 | 5.4 | 240.2 | 15.6 |
| M1 Yearly | 0.1 | 0.3 | 0.4 | 0.4 | 0.9 | 4.2 | 5.2 | 240.2 | 12.9 |
| M3 Monthly | 0.1 | 4.0 | 1.0 | 0.8 | 5.8 | 5.1 | 5.9 | 240.3 | 24.2 |
| M3 Other | 0.1 | 0.3 | 0.4 | 0.4 | 0.9 | 5.0 | 6.0 | 240.2 | 13.6 |
| M3 Quarterly | 0.1 | 0.5 | 0.6 | 0.7 | 1.6 | 4.6 | 6.0 | 240.3 | 15.7 |
| M3 Yearly | 0.1 | 0.4 | 0.5 | 0.4 | 1.0 | 5.9 | 5.4 | 240.2 | 12.7 |
| M4 Daily | 0.2 | 28.5 | 33.0 | 25.3 | 82.3 | 6.8 | 8.4 | 240.3 | 68.7 |
| M4 Hourly | 0.1 | 84.9 | 1.8 | 0.8 | 89.5 | 9.2 | 10.9 | 240.2 | 51.2 |
| M4 Monthly | 0.3 | 296.0 | 37.6 | 7.7 | 340.3 | 4.9 | 7.9 | 242.0 | 112.1 |
| M4 Quarterly | 0.2 | 15.7 | 6.2 | 1.6 | 23.2 | 4.7 | 7.6 | 240.9 | 62.3 |
| M4 Weekly | 0.1 | 0.6 | 0.5 | 1.3 | 2.2 | 5.6 | 7.8 | 240.3 | 20.8 |
| M4 Yearly | 0.2 | 4.3 | 0.8 | 0.7 | 5.6 | 4.2 | 6.1 | 240.8 | 35.6 |
| NN5 Daily | 0.1 | 2.5 | 0.5 | 0.6 | 3.3 | 7.3 | 10.9 | 240.3 | 37.4 |
| NN5 Weekly | 0.1 | 0.3 | 0.4 | 0.4 | 1.0 | 3.6 | 6.4 | 240.2 | 13.7 |
| Pedestrian Counts | 0.1 | >360 | 4.9 | >360 | >360 | 13.5 | 16.7 | 240.7 | 56.4 |
| Tourism Monthly | 0.1 | 10.2 | 0.8 | 0.7 | 13.1 | 4.4 | 7.6 | 240.2 | 26.0 |
| Tourism Quarterly | 0.1 | 0.9 | 0.6 | 0.7 | 1.8 | 3.6 | 6.3 | 240.2 | 14.6 |
| Tourism Yearly | 0.1 | 0.3 | 0.4 | 0.4 | 1.0 | 3.5 | 5.8 | 240.3 | 12.4 |
| Vehicle Trips | 0.1 | 1.1 | 0.6 | 0.7 | 2.2 | 5.1 | 7.3 | 240.2 | 16.0 |
| Web Traffic Weekly | 0.2 | 42.3 | 3.7 | 6.2 | 52.8 | N/A | 8.3 | 260.5 | 106.0 |
automl.cc/AutoML/2023/ABCD_Track (2023)
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting
Oleksandr Shchur, Ali Caner Turkmen, Nick Erickson, Huibin Shen, Alexander Shirkov, Tony Hu, Bernie Wang
Keywords: AutoML, forecasting, time series, probabilistic forecasting
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting

Oleksandr Shchur¹, Caner Turkmen¹, Nick Erickson¹, Huibin Shen², Alexander Shirkov¹, Tony Hu¹, Yuyang Wang²
¹Amazon Web Services  ²AWS AI Labs

Abstract

We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic time series forecasting.[1] Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines both conventional statistical models, machine-learning based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.

1 Introduction

Time series (TS) forecasting is a fundamental statistical problem with applications in diverse domains such as inventory planning (Syntetos et al., 2009), smart grids (Hong et al., 2020), and epidemiology (Nikolopoulos et al., 2021). Decades of research led to the development of various forecasting approaches, from simple statistical models (Hyndman and Athanasopoulos, 2018) to expressive deep-learning-based architectures (Benidis et al., 2022). Despite the availability of various forecasting approaches, practitioners often struggle with selecting the most appropriate method and adhering to best practices when implementing and evaluating forecasting pipelines.

AutoML aims to mitigate these challenges by providing tools that enable practitioners to develop accurate and efficient predictive models without extensive domain knowledge.
While traditional AutoML methods have focused primarily on classification and regression tasks for tabular data (Thornton et al., 2013; Feurer et al., 2015; Olson and Moore, 2016; Erickson et al., 2020; LeDell and Poirier, 2020; Zimmer et al., 2021), automated time series forecasting has received comparatively less attention, with only a few open-source AutoML forecasting frameworks having been proposed (Deng et al., 2022; Catlin, 2022). Furthermore, existing automated forecasting frameworks tend to generate point forecasts without considering uncertainty, which is a crucial factor in many practical applications (Gneiting and Katzfuss, 2014).

To close this gap, we introduce AutoGluon–TimeSeries (AG–TS), an open-source AutoML framework for probabilistic time series forecasting written in Python. AG–TS can generate both point and probabilistic forecasts for collections of univariate time series. Together with support for static and time-varying covariates, this makes AG–TS applicable to most real-world forecasting tasks.

As part of the AutoGluon framework (Erickson et al., 2020; Shi et al., 2021), AG–TS adheres to the principles of ease of use and robustness, empowering users with limited expertise in the target domain to generate highly accurate predictions with minimal coding effort. The architecture is capable of handling failures of individual models when necessary, producing a valid result as long as any single model was trained successfully.

[1] https://github.com/autogluon/autogluon

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Figure 1: Point forecast (left) and quantile forecast (right) for a univariate time series.

We evaluate the performance of AG–TS against other established forecasting methods and AutoML systems using 29 publicly available benchmark datasets. The results demonstrate AG–TS's strong performance, outperforming various competing approaches in terms of both point and probabilistic forecast accuracy.
This highlights the potential of AG–TS as a valuable tool for practitioners and researchers seeking an automated and versatile solution for time series forecasting.

2 Probabilistic Time Series Forecasting

The probabilistic time series forecasting problem can be formally stated as follows. The data $\mathcal{D}=\{y_{i,1:T_i}\}_{i=1}^{N}$ is a collection of $N$ univariate time series, where $y_{i,1:T_i}=(y_{i,1},\dots,y_{i,T_i})$, $y_{i,t}$ is the value of the $i$-th time series at time $t$, and $T_i$ is the length of the $i$-th time series.[2] For example, $y_{i,t}$ may correspond to the number of units of product $i$ sold on day $t$. The goal of time series forecasting is to predict the future $H$ values for each time series in $\mathcal{D}$. The parameter $H$ is known as prediction length or forecast horizon.

Each time series $y_{i,1:T}$ may additionally be associated with covariates $X_{i,1:T+H}$. These include both static covariates (e.g., location of the store, product ID) and time-varying covariates. The time-varying covariates may, in turn, be known in the future (e.g., day of the week, promotions) or only known in the past (e.g., weather, sales of other products).

In the most general form, the goal of probabilistic forecasting is to model the conditional distribution of the future time series values $y_{i,T+1:T+H}$ given the past values $y_{i,1:T}$ and the related covariates $X_{i,1:T+H}$

$$p(y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H}).$$

In practice, we are rarely interested in the full predictive distribution and rather represent the range of possible outcomes with quantile forecasts $\hat{y}^{q}_{i,T+1:T+H}$ for chosen quantile levels $q \in (0,1)$. The quantile forecast implies that the future time series value $y_{i,T+h}$ is predicted to exceed $\hat{y}^{q}_{i,T+h}$ with probability $q$ (Wen et al., 2017; Lim et al., 2021).

If the uncertainty is of no interest, we can instead report a point forecast of the future time series values. For example, we can summarize the prediction using the conditional mean

$$\hat{y}_{i,T+1:T+H} = \mathbb{E}_p\left[\, y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H} \,\right].$$

Figure 1 demonstrates the difference between a point forecast and a quantile forecast.
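Point and quantile forecasts are also scored differently. The sketch below implements textbook versions of the two metrics used later in the paper, mean weighted quantile loss (wQL) and mean absolute scaled error (MASE); it illustrates the standard definitions and is not AG–TS's exact implementation.

```python
import numpy as np

def pinball_loss(y, y_hat, q):
    # Quantile (pinball) loss: penalizes under-prediction with weight q
    # and over-prediction with weight (1 - q).
    diff = y - y_hat
    return np.sum(np.maximum(q * diff, (q - 1) * diff))

def wql(y, quantile_forecasts, quantile_levels):
    # Weighted quantile loss: pinball losses scaled by the total absolute
    # value of the target, averaged over the chosen quantile levels.
    scale = np.sum(np.abs(y))
    losses = [2 * pinball_loss(y, quantile_forecasts[q], q) / scale
              for q in quantile_levels]
    return float(np.mean(losses))

def mase(y_train, y_test, y_pred, season=1):
    # Mean absolute scaled error: forecast error divided by the in-sample
    # error of the seasonal-naive one-step forecast.
    naive_err = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return float(np.mean(np.abs(y_test - y_pred)) / naive_err)
```

A perfect forecast yields 0 for both metrics, and MASE equal to 1 matches the in-sample accuracy of the seasonal-naive baseline.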
Finally, note that here we consider the problem of forecasting multiple univariate time series, also known as panel data, which is different from multivariate forecasting (Benidis et al., 2022).

[2] To reduce clutter in notation, we assume that all time series have the same length $T$ (even though AG–TS supports the case when time series have different lengths).

3 AutoGluon–TimeSeries

AutoGluon–TimeSeries enables users to generate probabilistic time series forecasts in a few lines of code, as shown by the following minimal example.

```python
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

train_data = TimeSeriesDataFrame.from_path("train.csv")
predictor = TimeSeriesPredictor(prediction_length=30).fit(train_data)
predictions = predictor.predict(train_data)  # forecast next 30 time steps
```

Loading the data. A TimeSeriesDataFrame object stores a collection of univariate time series and provides utilities such as loading data from disk and train-test splitting. Internally, time series data is represented as a pandas.DataFrame (pandas development team, 2020) in long format (Table 1), but loaders are also available for other formats. Besides the target time series that need to be forecast, TimeSeriesDataFrame can also store the static and time-varying covariates.

Table 1: Collection of univariate time series stored as a TimeSeriesDataFrame. Each row contains the unique ID of the time series, timestamp, and the value of the target time series.

| item_id | timestamp | target |
|---|---|---|
| T1 | 2020-03-02 | 23 |
| T1 | 2020-03-03 | 43 |
| ... | ... | ... |
| T999 | 2020-08-29 | 15 |
| T999 | 2020-08-31 | 27 |

Defining the task. Users can specify the forecasting task by creating a TimeSeriesPredictor object. Task definition includes information such as prediction length, the list of quantile levels to be predicted, and the evaluation metric. The evaluation metric should be chosen based on the downstream application.
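As a concrete illustration of the long format in Table 1, the input frame can be assembled with plain pandas before being handed to AG–TS (the values below mirror Table 1; how the frame is wrapped into a TimeSeriesDataFrame is left to the library's loaders):

```python
import pandas as pd

# Long format: one row per (item_id, timestamp) observation, as in Table 1.
df = pd.DataFrame({
    "item_id": ["T1", "T1", "T999", "T999"],
    "timestamp": pd.to_datetime(
        ["2020-03-02", "2020-03-03", "2020-08-29", "2020-08-31"]),
    "target": [23, 43, 15, 27],
})
```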
For example, mean weighted quantile loss (wQL) measures the accuracy of quantile forecasts, and mean absolute scaled error (MASE) reports the accuracy of the point forecast relative to a naive baseline. When creating the predictor, users can also specify what time-varying covariates are known in the future—the remainder will be treated as past-only covariates.

Fitting the predictor. Inside the fit() method, the predictor preprocesses the data, fits and evaluates various models using cross-validation, optionally performs hyperparameter optimization (HPO) on selected models, and trains an ensemble of the individual forecasting models. By default, AG–TS provides user-friendly presets users can choose from to manage the training time–accuracy tradeoff. Advanced users can also explicitly specify the models to use and their hyperparameters, or specify search spaces in which optimal hyperparameters will be searched.

Making predictions. After the predictor has been fit, the predict() method can be used to generate predictions on new data—including time series that haven't been seen during training. Like the input data, the predictions are stored in a long-format data frame, where the columns contain the mean (expected value) and quantile forecasts at the desired quantile levels (Table 2).

Documentation. We provide various additional resources on the official website auto.gluon.ai. These include installation instructions, tutorials, and a cheatsheet summarizing the main features.

3.1 Design Considerations

AG–TS was launched as a part of the AutoGluon suite (Erickson et al., 2020) in v0.5, building on the foundation of AutoGluon and borrowing some design elements from other forecasting libraries like GluonTS (Alexandrov et al., 2020). Since then, AG–TS has evolved into a full solution for time series forecasting. Below, we highlight some of AG–TS's key design principles.

Table 2: Mean and quantile forecasts generated by a TimeSeriesPredictor.
The forecasts include the next prediction_length many time steps of each time series in the dataset.

| item_id | timestamp | mean | 0.1 | 0.5 | 0.9 |
|---|---|---|---|---|---|
| T1 | 2020-09-01 | 17 | 10 | 16 | 23 |
| T1 | 2020-09-02 | 25 | 15 | 23 | 31 |
| ... | ... | ... | ... | ... | ... |
| T999 | 2020-09-29 | 33 | 21 | 33 | 36 |
| T999 | 2020-09-30 | 30 | 24 | 28 | 34 |

Ensembles over HPO. AG–TS follows the AutoGluon philosophy, relying on ensembling techniques instead of HPO or neural architecture search. The library features a broad selection of models whose probabilistic forecasts are combined in an ensemble selection step (Caruana et al., 2004). AG–TS favors broadening the portfolio of forecasters over exploring the hyperparameter space of any particular model. While AG–TS does support HPO techniques, HPO is excluded from most preset configurations to reduce training time and minimize overfitting on the validation data.

Presets and default hyperparameters. In order to provide defaults that work well out of the box for users who are not familiar with forecasting, AG–TS includes various presets—high-level configuration options that allow users to trade off between fast training and higher accuracy. AG–TS follows the convention-over-configuration principle: all models feature default configurations of hyperparameters that are expected to work well given the selected preset. At the same time, advanced users have the option to manually configure individual models and use the TimeSeriesPredictor as a unified API for training, evaluating and combining various forecasting models (see the documentation for details).

Model selection. Time series forecasting introduces unique challenges in model validation and selection. Importantly, as the main aim of the model is to generalize into the future, special care has to be taken to define validation sets that are held out across time. The AG–TS API is designed with this consideration. If the user does not explicitly specify a validation set, the library holds out the window with the last prediction_length time steps of each time series as a validation set.
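The default holdout described above can be mimicked with plain pandas; a minimal sketch (the function name and frame layout are illustrative, not AG–TS API):

```python
import pandas as pd

def train_val_split(df, prediction_length):
    # Hold out the last `prediction_length` rows of every item as the
    # validation window; models train on the remaining prefix.
    holdout = df.groupby("item_id").tail(prediction_length)
    train = df.drop(holdout.index)
    return train, holdout

df = pd.DataFrame({
    "item_id": ["A"] * 5 + ["B"] * 5,
    "timestamp": pd.date_range("2020-01-01", periods=5).tolist() * 2,
    "target": range(10),
})
train, val = train_val_split(df, prediction_length=2)
```

Repeating the same construction over several shifted windows gives the multi-window backtesting mentioned next.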
Optionally, multiple windows can be used to perform so-called backtesting.

3.2 Forecasting Models

There are two families of approaches to forecasting in large panels of time series. The first approach is to fit local classical parametric statistical models to each individual time series. A second approach is built on expressive machine-learning-based approaches that are fit globally on all time series at once. AG–TS features both approaches, incorporating forecasting models from both families and combining them in an ensemble.

Local models. This category contains conventional methods that capture simple patterns like trend and seasonality. Examples include ARIMA (Box et al., 1970), Theta (Assimakopoulos and Nikolopoulos, 2000) and ETS (Hyndman et al., 2008), as well as simple baselines like Seasonal Naive (Hyndman and Athanasopoulos, 2018). AG–TS relies on implementations of these provided by StatsForecast (Garza et al., 2022).

The defining characteristic of local models is that a separate model is fit to each individual time series in the dataset (Januschowski et al., 2020). This means that local models need to be re-fit when making predictions for new time series not seen during training. To mitigate this limitation, AG–TS caches the model predictions and parallelizes their fitting across CPU cores using Joblib (Joblib Development Team, 2020).

Global models. Unlike local models, a single global model is fitted to the entire dataset and used to make predictions for all time series. Global models used by AG–TS can be subdivided into two categories: deep learning and tabular models. Deep-learning models such as DeepAR (Salinas et al., 2020), PatchTST (Nie et al., 2023), and Temporal Fusion Transformer (Lim et al., 2021) use neural networks to generate probabilistic forecasts for future data. AG–TS uses PyTorch-based deep learning models from GluonTS (Alexandrov et al., 2020).
Tabular models like LightGBM (Ke et al., 2017) operate by first converting the time series forecasting task into a tabular regression problem. This can be done either recursively—by predicting future time series values one at a time—or by directly forecasting all future values simultaneously (Januschowski et al., 2022). AG–TS relies on regression models provided by AutoGluon–Tabular and uses MLForecast (Nixtla, 2023) for converting them into tabular forecasters.

Global models typically provide faster inference compared to local models, since there is no need for re-training at prediction time. This, however, comes at the cost of longer training times, since more parameters need to be estimated. Global models also naturally handle various types of covariates and utilize information present across different time series, which is known as cross-learning (Semenoglou et al., 2021).

Ensembling. After AG–TS finishes sequentially fitting the individual models, they are combined using 100 steps of the forward selection algorithm (Caruana et al., 2004). The output of the ensemble is a convex combination of the model predictions:

$$\hat{y}^{\text{ensemble}}_{i,T+1:T+H} = \sum_{m=1}^{M} w_m \cdot \hat{y}^{(m)}_{i,T+1:T+H} \quad \text{subject to} \quad w_m \geq 0, \quad \sum_{m=1}^{M} w_m = 1,$$

where $\hat{y}^{(m)}_{i,T+1:T+H}$ are either point or quantile forecasts generated by each of the $M$ trained models. Note that in the case of probabilistic forecasting, the ensemble computes a weighted average of the quantile forecasts of the individual models—a method known as Vincentization (Ratcliff, 1979).

The ensemble weights $w_m$ are tuned to optimize the chosen evaluation metric (e.g., wQL, MASE) on the out-of-fold predictions generated using time series cross-validation (Hyndman and Athanasopoulos, 2018). The main advantages of the forward selection algorithm are its simplicity, compatibility with arbitrary evaluation metrics, and the sparsity of the final ensemble.

4 Related work

Time series forecasting is a challenging task, and the idea of automated forecasting has long intrigued statistics and ML researchers.
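Returning to the ensembling step of Section 3.2: the forward selection algorithm of Caruana et al. (2004) can be sketched in a few lines (an illustrative implementation, not the library's code). Because models are selected greedily with replacement, the resulting weights are automatically non-negative and sum to one:

```python
import numpy as np

def forward_selection(preds, y, loss_fn, n_steps=100):
    # preds: array of shape (n_models, horizon) with out-of-fold forecasts.
    # At each step, add (with replacement) the model whose inclusion in the
    # running average most reduces the validation loss.
    preds = np.asarray(preds, dtype=float)
    counts = np.zeros(len(preds))
    total = np.zeros_like(preds[0])
    for step in range(1, n_steps + 1):
        scores = [loss_fn((total + p) / step, y) for p in preds]
        best = int(np.argmin(scores))
        counts[best] += 1
        total += preds[best]
    return counts / n_steps  # convex ensemble weights

y = np.array([1.0, 2.0, 3.0])
preds = [y.copy(), np.zeros(3)]
mse = lambda p, t: float(np.mean((p - t) ** 2))
weights = forward_selection(preds, y, mse, n_steps=10)  # → [1.0, 0.0]
```

The greedy loop makes the procedure compatible with any evaluation metric and tends to yield sparse weight vectors, matching the advantages listed above.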
An early influential work on automated forecasting was the R package forecast (Hyndman and Khandakar, 2008) that introduced the AutoETS and AutoARIMA models. These models automatically tune their parameters (e.g., trend, seasonality) for each individual time series using an in-sample information criterion.

The following decade saw a growing focus on deep learning models for time series (Benidis et al., 2022; Wen et al., 2017; Salinas et al., 2020; Lim et al., 2021; Oreshkin et al., 2020). Several works have explored how such neural-network-based models can be combined with AutoML techniques to generate automated forecasting solutions (Van Kuppevelt et al., 2020; Shah et al., 2021; Javeri et al., 2021). Another line of research focused on optimizing the entire forecasting pipeline—including data preprocessing and feature engineering—not just hyperparameter tuning for individual models (Dahl, 2020; Kurian et al., 2021; da Silva et al., 2022). A recent survey by Meisenbacher et al. (2022) provides an overview of such automated pipelines.

Even though AutoML for forecasting is becoming an active research topic, few of the recent developments have found their way from academic papers to software packages. Available open-source AutoML forecasting libraries include AutoPyTorch–Forecasting (Deng et al., 2022), AutoTS (Catlin, 2022) and PyCaret (Ali, 2020). In contrast to these frameworks, AG–TS supports probabilistic forecasting and focuses on ease of use, allowing users to generate forecasts in a few lines of code.

5 Experiments

5.1 Setup

The goal of our experiments is to evaluate the point and probabilistic forecast accuracy of AG–TS. As baselines, we use various statistical and ML-based forecasting methods.

Baseline methods. AutoARIMA, AutoETS, and AutoTheta are established statistical forecasting models that automatically tune model parameters for each time series individually based on an information criterion (Hyndman et al., 2008).
This means such models do not require a validation set and use in-sample statistics for model tuning. StatEnsemble is defined by taking the median of the predictions of the three statistical models. Such statistical ensembles, despite their simplicity, have been shown to achieve competitive results in forecasting competitions (Makridakis et al., 2018). We use Python implementations of all these methods provided by the StatsForecast library (Garza et al., 2022). We additionally use Seasonal Naive as a sanity-check baseline that all other methods are compared against (Hyndman and Athanasopoulos, 2018).

For ML-based methods, we include two established deep learning forecasting models, DeepAR (Salinas et al., 2020) and Temporal Fusion Transformer (TFT) (Lim et al., 2021). We use the PyTorch implementations of these models provided by GluonTS (Alexandrov et al., 2020). Finally, we add the AutoML forecasting framework AutoPyTorch–Forecasting (Deng et al., 2022) to our comparison. AutoPyTorch builds deep learning forecasting models by combining neural architecture search (e.g., by trying various encoder modules) and hyperparameter optimization (e.g., by tuning the learning rate). The search process is powered by a combination of Bayesian and multi-fidelity optimization. Similar to AutoGluon, the models are combined using ensemble selection (Caruana et al., 2004).

Datasets. In our evaluation we use 29 publicly available forecasting benchmark datasets provided via GluonTS. These include datasets from the Monash Forecasting Repository (Godahewa et al., 2021), such as the M1, M3 and M4 competition data (Makridakis and Hibon, 2000; Makridakis et al., 2018). We selected the datasets from the Monash Repository that contain more than a single time series and fewer than 15M total time steps.
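The StatEnsemble baseline described under "Baseline methods" above amounts to a one-liner: the element-wise median of the statistical models' forecasts. A minimal sketch (the forecast values are made up for illustration):

```python
import numpy as np

def stat_ensemble(forecasts):
    # Element-wise median across models (e.g., AutoARIMA, AutoETS, AutoTheta).
    return np.median(np.stack(forecasts), axis=0)

combined = stat_ensemble([
    np.array([1.0, 2.0, 3.0]),   # hypothetical AutoARIMA forecast
    np.array([2.0, 3.0, 4.0]),   # hypothetical AutoETS forecast
    np.array([10.0, 0.0, 4.0]),  # hypothetical AutoTheta forecast
])  # → array([2., 2., 4.])
```

The median makes the combination robust to a single model producing an outlying forecast, which helps explain its competitiveness despite its simplicity.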
Our selection of datasets covers various scenarios that can be encountered in practice—from small datasets (M1 and M3), to datasets with a few long time series (Electricity, Pedestrian Counts) and large collections of medium-sized time series (M4). A comprehensive list of dataset statistics is provided in Table 8 in the appendix.

Configuration. We train the TimeSeriesPredictor from AG–TS with best_quality presets, as these are designed to produce the most accurate forecasts, and set the time_limit to 4 hours. Note that the presets were fixed a priori and not optimized using the benchmark datasets. DeepAR and TFT are also trained for up to 4 hours with early stopping on validation loss with patience set to 200. For these models, the model checkpoint achieving the best validation loss is used to generate the test predictions. The time limit for AutoPyTorch is similarly set to 4 hours. We set no time limit for the remaining statistical models, as they do not support such functionality. In case the runtime of a single experiment exceeds 6 hours, the job is interrupted and the result is marked as a failure. More details about the configuration are available in Appendix A.3.

All models are trained using AWS m6i.4xlarge cloud instances (16 vCPU cores, 64 GB RAM). We use CPU instances to fairly evaluate the CPU-only baselines, though AG–TS additionally supports GPU training. Each run is repeated 5 times using different random seeds for non-deterministic models. We run all experiments using AutoMLBenchmark (Gijsbers et al., 2022). In the supplement, we provide full configuration details and the scripts for reproducing all experiments.

5.2 Forecasting Accuracy

We measure the accuracy of the point forecasts by reporting the mean absolute scaled error (MASE) of all forecasting methods on all benchmark datasets. AG–TS and AutoPyTorch are trained to optimize the MASE metric, while all other models are trained using their normal training procedure. We report the aggregate statistics in Table 3, and provide the full results for individual models and datasets in Table 9 in the appendix.

Table 3: Point forecast accuracy comparison of baseline methods with AutoGluon (based on the MASE metric) on 29 datasets. Listed are the number of datasets where each method produced: lower error than AutoGluon (Wins), higher error (Losses), error within 0.001 (Ties), an error during prediction (Failures), or the lowest error among all methods (Champion). Average rank and average error are computed using the datasets where no method failed. We rescale the errors for each dataset between [0, 1] to ensure that averaging is meaningful. The final column reports the win rate versus the Seasonal Naive baseline. Individual results are given in Table 9.

| Framework | Wins | Losses | Ties | Failures | Champion | Average rank | Average rescaled error | Win rate vs. baseline |
|---|---|---|---|---|---|---|---|---|
| AutoGluon (MASE) | - | - | - | 0 | 19 | 2.08 | 0.073 | 100.0% |
| StatEnsemble | 6 | 20 | 0 | 3 | 3 | 3.12 | 0.238 | 82.8% |
| AutoPyTorch (MASE) | 4 | 25 | 0 | 0 | 2 | 4.12 | 0.257 | 93.1% |
| AutoETS | 4 | 25 | 0 | 0 | 1 | 4.64 | 0.374 | 75.9% |
| AutoTheta | 4 | 23 | 0 | 2 | 0 | 4.92 | 0.427 | 72.4% |
| DeepAR | 4 | 24 | 0 | 1 | 2 | 5.08 | 0.434 | 93.1% |
| AutoARIMA | 4 | 22 | 0 | 3 | 1 | 5.92 | 0.612 | 79.3% |
| TFT | 2 | 27 | 0 | 0 | 1 | 6.12 | 0.635 | 75.9% |

Table 4: Probabilistic forecast accuracy comparison of each baseline method with AutoGluon (based on the wQL metric) on 29 datasets. The columns are defined as in Table 3. Results for individual models and datasets are given in Table 10.

| Framework | Wins | Losses | Ties | Failures | Champion | Average rank | Average rescaled error | Win rate vs. baseline |
|---|---|---|---|---|---|---|---|---|
| AutoGluon (wQL) | - | - | - | 0 | 19 | 1.80 | 0.086 | 100.0% |
| StatEnsemble | 3 | 23 | 0 | 3 | 0 | 3.36 | 0.330 | 86.2% |
| DeepAR | 5 | 23 | 0 | 1 | 1 | 4.08 | 0.455 | 89.7% |
| TFT | 5 | 24 | 0 | 0 | 5 | 4.24 | 0.487 | 89.7% |
| AutoETS | 3 | 26 | 0 | 0 | 2 | 4.40 | 0.489 | 69.0% |
| AutoTheta | 2 | 25 | 0 | 2 | 1 | 5.00 | 0.545 | 69.0% |
| AutoARIMA | 4 | 22 | 0 | 3 | 1 | 5.12 | 0.641 | 82.8% |

We measure the accuracy of the probabilistic (quantile) forecasts by reporting the mean weighted quantile loss (wQL) averaged over 9 quantile levels $q \in \{0.1, 0.2, \dots, 0.9\}$. AG–TS is configured to optimize the wQL metric.
We exclude AutoPyTorch from this comparison since this framework does not support probabilistic forecasting. We report the aggregate statistics in Table 4, and provide the full results for individual models and datasets in Table 10 in the appendix.

Some of the frameworks failed to generate forecasts on certain datasets. AutoARIMA, AutoTheta and StatEnsemble did not finish training on some datasets (Electricity–Hourly, KDD Cup 2018, and Pedestrian Counts) within 6 hours. This is caused by the poor scaling of these models to very long time series. The DeepAR model fails on one dataset (Web Traffic Weekly) due to numerical errors encountered during training.

Discussion. The results demonstrate that AG–TS outperforms all other frameworks, achieving the best average rank and rescaled error for both point and probabilistic forecasts, and even beating the best-in-hindsight competing method on 19 out of 29 datasets.

StatEnsemble places second after AG–TS. The statistical ensemble performs especially well on small datasets such as M1 and M3. This demonstrates that in the low-data regime simple approaches, like ensembling by taking the median, may perform better than the learned ensemble selection strategy employed by both AutoML frameworks.

Figure 2: Total runtime of each framework across all datasets. AutoGluon always completes training and prediction under the time limit and achieves a mean runtime of 33 minutes. AutoPyTorch is always trained for the full 4 hour time limit. Statistical models train faster in most cases, but may take an extremely long time to train on datasets with long time series. The runtimes for individual models and datasets are provided in Table 11.

AutoPyTorch achieves similar performance to StatEnsemble in point forecasting across most performance indicators. Interestingly, AG–TS tends to outperform AutoPyTorch on larger datasets like M4. This means that AG–TS's strategy of training various light-weight models performs well in this setting under the limited time budget. Also note that configuring AutoPyTorch requires more code and domain knowledge, compared to the 3 lines of code necessary to reproduce the above results with AG–TS.

Deep learning models DeepAR and TFT perform well in terms of probabilistic forecasting, but fall behind simple statistical approaches in point forecasts. This makes sense, since the objective functions optimized by these deep learning models are designed for probabilistic forecasting.

5.3 Runtime Comparison

High accuracy is not the only important property of an AutoML system—the ability to generate predictions in a reasonable amount of time is often necessary in practice. To evaluate the efficiency of AG–TS, we compare its runtime with other frameworks. We visualize the runtime of each framework across all datasets in Figure 2. Note that here we compare the total runtime, defined as the sum of training and prediction times. This reflects the typical forecasting workflow in practice, where the forecast is generated once for each time series. Moreover, it is hard to distinguish between the training and prediction time for local models, where a new model is trained for each new time series.

AG–TS completes training and prediction under the 4-hour time limit for all 29 datasets, and achieves a mean runtime of 33 minutes. While statistical models are faster on average, they can be extremely slow to train on datasets consisting of long time series. For instance, the runtimes of AutoARIMA, AutoTheta and StatEnsemble exceed 6 hours for 3 datasets with long time series. The deep learning models DeepAR and TFT have a higher median runtime compared to the statistical models, but never reach the 4 hour time limit due to early stopping. Finally, AutoPyTorch always consumes the entire 4 hour time budget due to its design.

To summarize, AG–TS is able to produce accurate forecasts under mild time budgets.
While, on average, AG–TS takes more time than the individual models, it produces more accurate forecasts and avoids the extremely long runtimes sometimes exhibited by local models. The results also demonstrate that limited training time is better spent training and ensembling many diverse models (as done by AG–TS), rather than on hyperparameter tuning for a restricted set of models (as done by AutoPyTorch).

5.4 Ablations

Finally, we perform ablations to understand the effect of different components on the final performance. We compare the point forecast accuracy of the TimeSeriesPredictor trained for 4 hours with the MASE evaluation metric (Section 5.2) against several variations with certain disabled components. First, we exclude some base models from the presets: statistical models (NoStatModels), deep learning models (NoDeepModels), and tabular models (NoTabularModels). We also consider reducing the time limit to 1 hour (AutoGluon-1h) or 10 minutes (AutoGluon-10m), as well as disabling the final ensembling step (NoEnsemble). In the latter case, AG–TS predicts using the model with the best validation score. The rest of the setup is identical to Section 5.2.

Table 5: Ablation study. We compare the point forecast accuracy of AutoGluon, where certain component models are removed, ensembling is disabled, or the time limit is reduced. All versions except AutoGluon-1h and AutoGluon-10m are trained for 4 hours. The columns are defined and the scores are computed as in Table 3.

| Framework | Champion | Average rank | Average rescaled error |
|---|---|---|---|
| AutoGluon-1h | 19 | 2.04 | 0.070 |
| AutoGluon-4h | 19 | 2.08 | 0.073 |
| NoStatModels | 16 | 2.12 | 0.094 |
| NoTabularModels | 15 | 2.12 | 0.085 |
| NoDeepModels | 15 | 2.28 | 0.124 |
| AutoGluon-10m | 14 | 2.50 | 0.099 |
| NoEnsemble | 7 | 3.52 | 0.177 |

Table 5 shows the metrics for the different model variations, each compared to the baselines from Section 5.2. AutoGluon-4h and AutoGluon-1h produce nearly identical results.
This is not surprising, as the 4-hour version finishes training under 1 hour for most datasets (Figure 2). Interestingly, AutoGluon achieves strong results even with a 10-minute time limit, achieving the best average rank and outperforming the best-in-hindsight model on 14 out of 29 datasets.

Removing the ensembling step has the most detrimental effect on the overall accuracy. This highlights the importance of ensembling, confirming the findings of other works (Makridakis et al., 2018; Borchert et al., 2022). The ablations also show that all 3 classes of models used by AutoGluon are important for the overall performance, with deep learning models being the most critical component.

6 Future Work

Our experiments demonstrate the strong forecasting accuracy achieved by AG–TS. Despite these encouraging initial results, we aim to continue developing the library, adding new functionality and further boosting the forecasting performance. This includes incorporating the various ideas in the space of AutoML for forecasting (Meisenbacher et al., 2022), with focus on the following directions.

Ensembling. Advanced ensembling strategies, such as stacking (Ting and Witten, 1997), lie at the core of modern high-performing AutoML systems (Erickson et al., 2020). How to best generalize these techniques to probabilistic forecasting is an active, but still open research question (Gastinger et al., 2021; Wang et al., 2022).

Calibration. Many practical tasks require guarantees on the uncertainty estimates associated with the forecasts. Conformal prediction methods (Stankeviciute et al., 2021; Xu and Xie, 2021) provide one way to obtain such guarantees, and we plan to incorporate them into AG–TS in the future.

New problem types. AG–TS supports the most common types of forecasting tasks, such as probabilistic forecasting or handling covariates. However, there are several settings that are currently (as of v0.8) not supported.
These include so-called cold-start forecasting (where little historic data is available) and generating forecast explanations (Rojat et al., 2021). Another interesting potential application for AG–TS is assisting judgemental forecasting. In this context, AG–TS could serve as a "tool" queried by a large language model (LLM) (Schick et al., 2023) to generate qualitative forecasts. More generally, combinations of LLMs with AutoML frameworks are an exciting direction for future work (Tornede et al., 2023).

Scalability. In our experiments we consider datasets with up to approximately 10^7 time steps across all time series. Modern applications, however, sometimes require operating on even larger scales. This would require improving the efficiency of existing models and developing new efficient AutoML techniques.

7 Conclusions

In this work, we introduced AutoGluon–TimeSeries, a powerful and user-friendly open-source AutoML library for probabilistic time series forecasting. By combining statistical models and deep learning forecasting approaches with ensembling techniques, AutoGluon–TimeSeries is able to achieve strong empirical results on a range of benchmark datasets. With the ability to generate accurate point and quantile forecasts with just 3 lines of Python code, this framework is poised to make time series forecasting more accessible and efficient for a wide range of users.

8 Broader Impact Statement

AutoGluon–TimeSeries enables users to generate accurate forecasts in a few lines of code. This democratizes machine learning, lowering the barrier to entry to forecasting for non-experts. At the same time, AutoGluon–TimeSeries can be used by experienced users to design highly accurate forecasting pipelines. More accurate forecasts can directly translate to real-world impact in various domains.
For example, forecasting renewable energy generation is a crucial component of smart grid management (Tripathy and Prusty, 2021); accurately predicting demand leads to more efficient inventory management and increased revenue (Makridakis et al., 2022).

The potential negative impacts of the proposed approach are similar to those of other forecasting models. One such danger arises when the limitations of forecasting methods are not taken into account in the context of decision making (e.g., when guiding policy decisions). As forecasting models only capture statistical dependencies, they may be misleading when trying to estimate the effects of actions or interventions.

9 Submission Checklist

1. For all authors...

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] All claims are supported by the experimental evaluation in Section 5.

(b) Did you describe the limitations of your work? [Yes] See Section 6.

(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 8.

(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] The paper conforms to the guidelines.

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [N/A] The paper contains no theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] The paper contains no theoretical results.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)?
[Yes] All of the above included in the supplementary material.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] Results are provided in CSV format.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [No] We provide the raw data and describe the procedure in the paper, which should make reproducing the results and figures straightforward.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is properly documented and we made sure that it can be executed in a fresh environment.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] We use the standard evaluation protocol: for all datasets, the last prediction_length time steps of each time series are held out and used to evaluate the forecasts produced by each method. For hyperparameters, see Section A.3.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We carefully made sure that this is the case.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 5.4.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] All methods use an identical evaluation protocol.

(i) Did you compare performance over time? [Yes] We allocate the same runtime budget of 4 hours to all methods. An ablation study is performed where the time limit is reduced to 1 hour and 10 minutes for AutoGluon.

(j) Did you perform multiple runs of your experiments and report random seeds?
[Yes] For all non-deterministic methods, the experiments are repeated with five random seeds: 1, 2, 3, 4, 5.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Error metrics produced by all non-deterministic methods include the mean and the standard deviation (see Tables 9 and 10).

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [No] These are not available for probabilistic time series forecasting.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] The compute infrastructure is described in Section 5.1. The total runtime of all experiments equals approximately 6000 hours (≈ # models × # seeds × # of datasets).

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [Yes] We describe the hyperparameter settings in Appendix A.3, in addition to providing the code that can be used to reproduce the results.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] References for all used datasets and methods are provided in Section 5.1.

(b) Did you mention the license of the assets? [Yes] This paper does not introduce any new public assets. The AutoGluon library is released under the Apache 2.0 License.

(c) Did you include any new assets either in the supplemental material or as a URL? [No] This paper does not introduce any new public assets.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The evaluation was performed using public benchmark datasets.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
[N/A] The evaluation was performed using public benchmark datasets.

5. If you used crowdsourcing or conducted research with human subjects...

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not use crowdsourcing or conduct research with human subjects.

References

Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D. C., Rangapuram, S., Salinas, D., Schulz, J., et al. (2020). GluonTS: Probabilistic and neural time series modeling in Python. The Journal of Machine Learning Research, 21(1):4629–4634.

Ali, M. (2020). PyCaret: An open source, low-code machine learning library in Python. https://www.pycaret.org.

Assimakopoulos, V. and Nikolopoulos, K. (2000). The Theta model: A decomposition approach to forecasting. International Journal of Forecasting, 16(4):521–530.

Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, Y., Maddix, D., Turkmen, C., Gasthaus, J., Bohlke-Schneider, M., Salinas, D., Stella, L., et al. (2022). Deep learning for time series forecasting: Tutorial and literature survey. ACM Computing Surveys, 55(6):1–36.

Borchert, O., Salinas, D., Flunkert, V., Januschowski, T., and Günnemann, S. (2022). Multi-objective model selection for time series forecasting. arXiv preprint arXiv:2202.08485.

Box, G. E., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M. (1970). Time series analysis: forecasting and control. John Wiley & Sons.

Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from libraries of models.
In Proceedings of the twenty-first international conference on Machine learning , page 18.Catlin, C. (2022). AutoTS: Automated time series forecasting. https://github.com/winedarksea/AutoTS .da Silva, F. R., Vieira, A. B., Bernardino, H. S., Alencar, V. A., Pessamilio, L. R., and Barbosa, H.J. C. (2022). Automated machine learning for time series prediction. In 2022 IEEE Congress onEvolutionary Computation (CEC) , pages 1–7. IEEE.Dahl, S. M. J. (2020). TSPO: an autoML approach to time series forecasting . PhD thesis.Deng, D., Karl, F., Hutter, F., Bischl, B., and Lindauer, M. (2022). Efficient automated deep learningfor time series forecasting. In Machine Learning and Knowledge Discovery in Databases: EuropeanConference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part III , pages664–680. Springer.Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and accurate AutoML for structured data. arXiv preprint arXiv:2003.06505 .Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficientand robust automated machine learning. Advances in neural information processing systems , 28.Garza, F., Mergenthaler Canseco, M., Challu, C., and Olivares, K. G. (2022). StatsForecast: Light-ning fast forecasting with statistical and econometric models. https://github.com/Nixtla/statsforecast (v1.15.0).Gastinger, J., Nicolas, S., Stepić, D., Schmidt, M., and Schülke, A. (2021). A study on ensemblelearning for time series forecasting and the need for meta-learning. In 2021 International JointConference on Neural Networks (IJCNN) , pages 1–8. IEEE.Gijsbers, P., Bueno, M. L., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J.(2022). AMLB: An AutoML benchmark. arXiv preprint arXiv:2207.12560 .Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting. 
Annual Review of Statistics and ItsApplication , 1:125–151.Godahewa, R., Bergmeir, C., Webb, G. I., Hyndman, R. J., and Montero-Manso, P. (2021). Monashtime series forecasting archive. In Neural Information Processing Systems Track on Datasets andBenchmarks .Hong, T., Pinson, P., Wang, Y., Weron, R., Yang, D., and Zareipour, H. (2020). Energy forecasting: Areview and outlook. IEEE Open Access Journal of Power and Energy , 7:376–388.Hyndman, R., Koehler, A. B., Ord, J. K., and Snyder, R. D. (2008). Forecasting with exponentialsmoothing: the state space approach . Springer Science & Business Media.Hyndman, R. J. and Athanasopoulos, G. (2018). Forecasting: principles and practice . OTexts.Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: the forecast packagefor R. Journal of statistical software , 27:1–22.Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., and Callot,L. (2020). Criteria for classifying forecasting methods. International Journal of Forecasting ,36(1):167–177.13Januschowski, T., Wang, Y., Torkkola, K., Erkkilä, T., Hasson, H., and Gasthaus, J. (2022). Forecastingwith trees. International Journal of Forecasting , 38(4):1473–1481.Javeri, I. Y., Toutiaee, M., Arpinar, I. B., Miller, J. A., and Miller, T. W. (2021). Improving neuralnetworks for time-series forecasting using data augmentation and AutoML. In 2021 IEEE SeventhInternational Conference on Big Data Computing Service and Applications (BigDataService) , pages1–8. IEEE.Joblib Development Team (2020). Joblib: Running Python functions as pipeline jobs. https://joblib.readthedocs.io/ (v1.2.0).Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). Lightgbm:A highly efficient gradient boosting decision tree. Advances in Neural Information ProcessingSystems , 30.Kurian, J. J., Dix, M., Amihai, I., Ceusters, G., and Prabhune, A. (2021). 
BOAT: A Bayesian optimiza-tion autoML time-series framework for industrial applications. In 2021 IEEE Seventh InternationalConference on Big Data Computing Service and Applications (BigDataService) , pages 17–24. IEEE.LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable automatic machine learning. In Proceedingsof the AutoML Workshop at ICML , volume 2020.Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretablemulti-horizon time series forecasting. International Journal of Forecasting , 37(4):1748–1764.Makridakis, S. and Hibon, M. (2000). The M3 competition: Results, conclusions and implications.International journal of forecasting , 16(4):451–476.Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 competition: Results, findings,conclusion and way forward. International Journal of Forecasting , 34(4):802–808.Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2022). The M5 competition: Background,organization, and implementation. International Journal of Forecasting , 38(4):1325–1336.Meisenbacher, S., Turowski, M., Phipps, K., Rätz, M., Müller, D., Hagenmeyer, V., and Mikut, R.(2022). Review of automated time series forecasting pipelines. Wiley Interdisciplinary Reviews:Data Mining and Knowledge Discovery , 12(6):e1475.Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. (2023). A time series is worth 64 words:Long-term forecasting with transformers. International Conference on Learning Representations .Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C., and Vasilakis, C. (2021). Forecasting andplanning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmen-tal decisions. European journal of operational research , 290(1):99–115.Nixtla (2023). MLForecast scalable machine learning for time series forecasting. v0.7.2.Olson, R. S. and Moore, J. H. (2016). TPOT: A tree-based pipeline optimization tool for automatingmachine learning. 
In Workshop on automatic machine learning , pages 66–74. PMLR.Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y. (2020). N-beats: Neural basis expansionanalysis for interpretable time series forecasting.pandas development team (2020). pandas-dev/pandas: Pandas. https://doi.org/10.5281/zenodo.3509134 (v1.5.3).14Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics.Psychological bulletin , 86(3):446.Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., and Díaz-Rodríguez, N. (2021). Explainableartificial intelligence (XAI) on timeseries data: A survey. arXiv preprint arXiv:2104.00950 .Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. (2020). DeepAR: Probabilistic forecastingwith autoregressive recurrent networks. International Journal of Forecasting , 36(3):1181–1191.Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., andScialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprintarXiv:2302.04761 .Semenoglou, A.-A., Spiliotis, E., Makridakis, S., and Assimakopoulos, V. (2021). Investigating theaccuracy of cross-learning time series forecasting methods. International Journal of Forecasting ,37(3):1072–1084.Shah, S. Y., Patel, D., Vu, L., Dang, X.-H., Chen, B., Kirchner, P., Samulowitz, H., Wood, D., Bramble,G., Gifford, W. M., et al. (2021). AutoAI-TS: AutoAI for time series forecasting. In Proceedings ofthe 2021 International Conference on Management of Data , pages 2584–2596.Shi, X., Mueller, J., Erickson, N., Li, M., and Smola, A. (2021). Multimodal AutoML on structuredtables with text fields. In 8th ICML Workshop on Automated Machine Learning (AutoML) .Stankeviciute, K., M Alaa, A., and van der Schaar, M. (2021). Conformal time-series forecasting.Advances in Neural Information Processing Systems , 34:6216–6228.Syntetos, A. A., Boylan, J. E., and Disney, S. M. (2009). 
Forecasting for inventory planning: a 50-yearreview. Journal of the Operational Research Society , 60:S149–S160.Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combinedselection and hyperparameter optimization of classification algorithms. In Proceedings of the 19thACM SIGKDD international conference on Knowledge discovery and data mining , pages 847–855.Ting, K. M. and Witten, I. H. (1997). Stacking bagged and dagged models.Tornede, A., Deng, D., Eimer, T., Giovanelli, J., Mohan, A., Ruhkopf, T., Segel, S., Theodorakopoulos,D., Tornede, T., Wachsmuth, H., et al. (2023). AutoML in the age of large language models:Current challenges, future opportunities and risks. arXiv preprint arXiv:2306.08107 .Tripathy, D. S. and Prusty, B. R. (2021). Forecasting of renewable generation for applications insmart grid power systems. In Advances in Smart Grid Power System , pages 265–298. Elsevier.Van Kuppevelt, D., Meijer, C., Huber, F., van der Ploeg, A., Georgievska, S., and van Hees, V. T.(2020). Mcfly: Automated deep learning on time series. SoftwareX , 12:100548.Wang, X., Hyndman, R. J., Li, F., and Kang, Y. (2022). Forecast combinations: an over 50-year review.International Journal of Forecasting .Wen, R., Torkkola, K., Narayanaswamy, B., and Madeka, D. (2017). A multi-horizon quantilerecurrent forecaster. arXiv preprint arXiv:1711.11053 .Xu, C. and Xie, Y. (2021). Conformal prediction interval for dynamic time-series. In InternationalConference on Machine Learning , pages 11559–11569. PMLR.Zimmer, L., Lindauer, M., and Hutter, F. (2021). Auto-PyTorch: Multi-fidelity metalearning forefficient and robust AutoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence ,43(9):3079–3090.15A Supplementary MaterialsA.1 Evaluation MetricsMASE. 
Mean absolute scaled error is the standard metric for evaluating the accuracy of point forecasts:

MASE = \frac{1}{N} \sum_{i=1}^{N} \frac{\frac{1}{H} \sum_{h=1}^{H} |y_{i,T+h} - \hat{y}_{i,T+h}|}{\frac{1}{T-s} \sum_{t=1}^{T-s} |y_{i,t+s} - y_{i,t}|}

MASE is scale-invariant and does not suffer from the limitations of other metrics, such as being undefined when the target time series equals zero (Hyndman and Athanasopoulos, 2018). We compute the metric using the median (0.5 quantile) forecast produced by each model.

wQL. Weighted quantile loss for a single quantile level q is defined as

wQL[q] = 2 \frac{\sum_{i=1}^{N} \sum_{h=1}^{H} \left[ q \cdot \max(y_{i,T+h} - \hat{y}^{q}_{i,T+h}, 0) + (1-q) \cdot \max(\hat{y}^{q}_{i,T+h} - y_{i,T+h}, 0) \right]}{\sum_{i=1}^{N} \sum_{h=1}^{H} |y_{i,T+h}|}

In our experiments, we report the mean wQL averaged over the 9 quantile levels Q = {0.1, 0.2, ..., 0.9}:

wQL = \frac{1}{|Q|} \sum_{q \in Q} wQL[q]

A.2 Reproducibility

We ran all experiments using AutoMLBenchmark (Gijsbers et al., 2022). We provide a fork of AMLB that includes all scripts necessary to reproduce the results from our paper in the following GitHub repository: https://github.com/shchur/automlbenchmark/tree/autogluon-timeseries-automl23/autogluon_timeseries_automl23

A.3 Model Configuration

We trained the baseline models DeepAR, TFT, AutoARIMA, AutoETS, and AutoTheta with the default hyperparameter configurations provided by the respective libraries. For DeepAR and TFT, the last prediction_length time steps of each time series were reserved as a validation set. Both models were trained for the full duration of 4 hours, saving the parameters and evaluating the validation loss at each epoch. The parameters achieving the lowest validation loss were then used for prediction. No HPO was performed for these two models, as AutoPyTorch already trains similar deep learning models with HPO.

For AutoPyTorch, we used the reference implementation by the authors.3 We set the target metric to "mean_MASE_forecasting", budget_type="epochs", min_budget=5, max_budget=50, and resampling_strategy=HoldoutValTypes.time_series_hold_out_validation.
We also set torch_num_threads to 16 (the number of vCPU cores).

In our experiments, we used AG–TS v0.8.2, the latest release at the time of publication. We used the "best_quality" presets and set eval_metric to either "MASE" or "mean_wQuantileLoss", depending on the experiment. All other parameters of the TimeSeriesPredictor were set to their default values. The "best_quality" presets include the following models: AutoETS, AutoARIMA, Theta (from StatsForecast), DeepAR, PatchTST, TFT (from GluonTS), DirectTabular, RecursiveTabular (wrappers around AutoGluon–Tabular and MLForecast), plus the baseline methods Naive and SeasonalNaive. The non-default hyperparameters of the individual models used by the best_quality presets are provided in Table 6.

3 https://github.com/dengdifan/Auto-PyTorch/blob/ecml22_apt_ts/examples/APT-TS/APT_task.py

The guiding principle for developing the presets for AG–TS can be summarized as "keep defaults whenever possible, except the cases where the defaults are clearly suboptimal". For example, we set allowmean=True for AutoARIMA to allow this model to handle time series with non-zero mean. For deep learning models, we increase the batch size from 32 to 64, since larger batch sizes typically lead to faster convergence for all deep learning models. The context_length is capped at a minimum value because the default setting context_length=prediction_length can result in models that ignore most of the history if prediction_length is very short. For PatchTST, we set the context_length to the value used in the respective publication (Nie et al., 2023).

The versions of the frameworks used in our experiments are listed in Table 7.

Table 6: Non-default hyperparameters that AutoGluon sets for the underlying models. The remaining parameters are all set to their defaults in the respective libraries.
Models not listed here(Naive, SeasonalNaive, AutoETS, DirectTabular, Theta) have all their hyperparameters set tothe default values.Model Hyperparameter ValueAutoARIMA allowmean Trueapproximation TrueDeepAR batch_size 64context_length max(10, 2 * prediction_length)num_samples 250PatchTST batch_size 64context_length 96TFT batch_size 64context_length max(64, 2 * prediction_length)RecursiveTabular tabular_hyperparameters {"GBM", "NN_TORCH"}Table 7: Versions of the frameworks used during evaluation.Framework VersionAutoGluon 0.8.2AutoPyTorch 0.2.1GluonTS 0.13.2MLForecast 0.7.3StatsForecast 1.5.0Python 3.9PyTorch 1.13.1+cpu17Table 8: Statistics of the benchmark datasets used in our experimental evaluation. Frequency isrepresented by pandas offset aliases. Seasonality depends on the frequency, and is used toconfigure statistical models and compute the MASE metric.Dataset # series # time steps Prediction length Frequency SeasonalityCar Parts 2,674 104,286 12 M 12CIF 2016 72 6,244 12 M 12COVID 266 48,412 30 D 7Electricity Hourly 321 8,428,176 48 H 24Electricity Weekly 321 47,508 8 W 1FRED-MD 107 76,612 12 M 12Hospital 767 55,224 12 M 12KDD Cup 2018 270 2,929,404 48 H 24M1 Monthly 617 44,892 18 M 12M1 Quarterly 203 8,320 8 Q 4M1 Yearly 181 3,429 6 Y 1M3 Monthly 1,428 141,858 18 M 12M3 Other 174 11,933 8 Q 1M3 Quarterly 756 30,956 8 Q 4M3 Yearly 645 14,449 6 Y 1M4 Daily 4,227 9,964,658 14 D 7M4 Hourly 414 353,500 48 H 24M4 Monthly 48,000 10,382,411 18 M 12M4 Quarterly 24,000 2,214,108 8 Q 4M4 Weekly 359 366,912 13 W 1M4 Yearly 22,974 707,265 6 Y 1NN5 Daily 111 81,585 56 D 7NN5 Weekly 111 11,655 8 W 1Pedestrian Counts 66 3,129,178 48 H 24Tourism Monthly 366 100,496 24 M 12Tourism Quarterly 427 39,128 8 Q 4Tourism Yearly 518 10,685 4 Y 1Vehicle Trips 262 45,253 7 D 7Web Traffic Weekly 145,063 15,376,678 8 W 118Table 9: Point forecast accuracy, as measured by MASE (lower is better). 
For non-deterministic methods(DeepAR, TFT, AutoPyTorch, AutoGluon) we report the mean and standard deviation of thescores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generatea forecast in 6 hours. "N/A" denotes model failure.SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluonCar Parts 1.127 1.118 1.133 1.208 1.052 0.749 (0.001) 0.751 (0.002) 0.746 (0.0) 0.747 (0.0)CIF 2016 1.289 1.069 0.898 1.006 0.945 1.278 (0.088) 1.372 (0.085) 1.023 (0.069) 1.073 (0.006)COVID 8.977 6.029 5.907 7.719 5.884 7.166 (0.334) 5.192 (0.211) 4.911 (0.086) 5.805 (0.0)Electricity Hourly 1.405 d.n.f. 1.465 d.n.f. d.n.f. 1.251 (0.006) 1.389 (0.025) 1.420 (0.123) 1.227 (0.003)Electricity Weekly 3.037 3.009 3.076 3.113 3.077 2.447 (0.211) 2.861 (0.122) 2.322 (0.277) 1.892 (0.0)FRED-MD 1.101 0.478 0.505 0.564 0.498 0.634 (0.038) 0.901 (0.086) 0.682 (0.058) 0.656 (0.0)Hospital 0.921 0.820 0.766 0.764 0.753 0.771 (0.008) 0.814 (0.012) 0.770 (0.003) 0.741 (0.001)KDD Cup 2018 0.975 d.n.f. 0.988 1.010 d.n.f. 
0.841 (0.036) 0.844 (0.065) 0.764 (0.047) 0.709 (0.026)M1 Monthly 1.314 1.152 1.083 1.092 1.045 1.117 (0.029) 1.534 (0.063) 1.278 (0.115) 1.235 (0.001)M1 Quarterly 2.078 1.770 1.665 1.667 1.622 1.742 (0.028) 2.099 (0.108) 1.813 (0.056) 1.615 (0.0)M1 Yearly 4.894 3.870 3.950 3.659 3.769 3.674 (0.161) 4.318 (0.122) 3.407 (0.078) 3.371 (0.007)M3 Monthly 1.146 0.934 0.867 0.855 0.845 0.960 (0.017) 1.062 (0.04) 0.956 (0.083) 0.822 (0.0)M3 Other 3.089 2.245 1.801 2.009 1.769 2.061 (0.182) 1.926 (0.028) 1.871 (0.024) 1.837 (0.004)M3 Quarterly 1.425 1.419 1.121 1.119 1.096 1.198 (0.037) 1.176 (0.036) 1.180 (0.032) 1.057 (0.002)M3 Yearly 3.172 3.159 2.695 2.608 2.627 2.694 (0.096) 2.818 (0.019) 2.691 (0.026) 2.520 (0.002)M4 Daily 1.452 1.153 1.228 1.149 1.145 1.145 (0.026) 1.176 (0.018) 1.152 (0.009) 1.156 (0.0)M4 Hourly 1.193 1.029 1.609 2.456 1.157 1.484 (0.151) 3.391 (0.442) 1.345 (0.404) 0.807 (0.001)M4 Monthly 1.079 0.812 0.803 0.834 0.780 0.933 (0.01) 0.947 (0.005) 0.851 (0.025) 0.782 (0.0)M4 Quarterly 1.602 1.276 1.167 1.183 1.148 1.367 (0.171) 1.277 (0.015) 1.176 (0.022) 1.139 (0.0)M4 Weekly 2.777 2.355 2.548 2.608 2.375 2.418 (0.026) 2.625 (0.038) 2.369 (0.177) 2.035 (0.001)M4 Yearly 3.966 3.720 3.077 3.085 3.032 3.858 (0.694) 3.220 (0.097) 3.093 (0.041) 3.019 (0.001)NN5 Daily 1.011 0.935 0.870 0.878 0.859 0.812 (0.01) 0.789 (0.004) 0.807 (0.021) 0.761 (0.004)NN5 Weekly 1.063 0.998 0.980 0.963 0.977 0.915 (0.085) 0.884 (0.012) 0.865 (0.025) 0.860 (0.0)Pedestrian Counts 0.369 d.n.f. 0.553 d.n.f. d.n.f. 
(Table 9, continued.)

Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluon
… 0.309 (0.005) 0.373 (0.01) 0.354 (0.024) 0.312 (0.009)
Tourism Monthly 1.631 1.585 1.529 1.666 1.469 1.461 (0.025) 1.719 (0.08) 1.495 (0.009) 1.442 (0.0)
Tourism Quarterly 1.699 1.655 1.578 1.648 1.539 1.599 (0.062) 1.830 (0.047) 1.647 (0.034) 1.537 (0.002)
Tourism Yearly 3.552 4.044 3.183 2.992 3.231 3.476 (0.165) 2.916 (0.197) 3.004 (0.053) 2.946 (0.007)
Vehicle Trips 1.302 1.427 1.301 1.284 1.203 1.162 (0.016) 1.227 (0.02) 1.162 (0.019) 1.113 (0.0)
Web Traffic Weekly 1.066 1.189 1.207 1.108 1.068 N/A 0.973 (0.022) 0.962 (0.01) 0.938 (0.0)

Table 10: Probabilistic forecast accuracy, as measured by wQL (lower is better). For non-deterministic methods (DeepAR, TFT, AutoGluon) we report the mean and standard deviation of the scores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generate a forecast in 6 hours. "N/A" denotes model failure.

Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoGluon
Car Parts 1.717 1.589 1.338 1.367 1.324 0.963 (0.009) 0.878 (0.004) 0.923 (0.0)
CIF 2016 0.031 0.017 0.039 0.027 0.028 0.114 (0.024) 0.010 (0.002) 0.019 (0.0)
COVID 0.140 0.030 0.046 0.094 0.046 0.072 (0.02) 0.031 (0.003) 0.030 (0.0)
Electricity Hourly 0.108 d.n.f. 0.100 d.n.f. d.n.f. 0.081 (0.002) 0.097 (0.001) 0.076 (0.0)
Electricity Weekly 0.141 0.138 0.144 0.146 0.141 0.123 (0.041) 0.118 (0.011) 0.088 (0.0)
FRED-MD 0.104 0.056 0.050 0.057 0.054 0.054 (0.021) 0.114 (0.011) 0.056 (0.0)
Hospital 0.062 0.058 0.053 0.055 0.053 0.053 (0.001) 0.054 (0.001) 0.051 (0.0)
KDD Cup 2018 0.489 d.n.f. 0.550 0.553 d.n.f. 0.363 (0.014) 0.488 (0.054) 0.323 (0.014)
M1 Monthly 0.153 0.146 0.163 0.159 0.152 0.136 (0.008) 0.224 (0.016) 0.135 (0.0)
M1 Quarterly 0.119 0.088 0.081 0.082 0.083 0.084 (0.003) 0.093 (0.006) 0.090 (0.0)
M1 Yearly 0.184 0.160 0.139 0.137 0.142 0.142 (0.029) 0.127 (0.004) 0.134 (0.001)
M3 Monthly 0.124 0.102 0.093 0.095 0.092 0.098 (0.001) 0.109 (0.003) 0.089 (0.0)
M3 Other 0.047 0.035 0.032 0.035 0.031 0.036 (0.002) 0.033 (0.001) 0.031 (0.0)
M3 Quarterly 0.083 0.079 0.069 0.070 0.068 0.073 (0.001) 0.071 (0.001) 0.065 (0.0)
M3 Yearly 0.141 0.162 0.129 0.128 0.128 0.117 (0.002) 0.133 (0.001) 0.114 (0.0)
M4 Daily 0.030 0.023 0.025 0.023 0.023 0.023 (0.0) 0.023 (0.0) 0.022 (0.0)
M4 Hourly 0.039 0.036 0.070 0.041 0.037 0.065 (0.03) 0.038 (0.002) 0.030 (0.001)
M4 Monthly 0.109 0.085 0.085 0.088 0.082 0.092 (0.003) 0.089 (0.001) 0.081 (0.0)
M4 Quarterly 0.099 0.082 0.079 0.079 0.076 0.084 (0.005) 0.083 (0.001) 0.075 (0.0)
M4 Weekly 0.073 0.050 0.052 0.053 0.050 0.046 (0.001) 0.049 (0.001) 0.041 (0.0)
M4 Yearly 0.138 0.130 0.111 0.115 0.109 0.124 (0.006) 0.116 (0.004) 0.104 (0.0)
NN5 Daily 0.292 0.169 0.162 0.188 0.164 0.148 (0.002) 0.145 (0.001) 0.140 (0.0)
NN5 Weekly 0.142 0.090 0.088 0.090 0.089 0.084 (0.007) 0.085 (0.001) 0.078 (0.0)
Pedestrian Counts 0.675 d.n.f. 0.764 d.n.f. d.n.f. 0.230 (0.006) 0.261 (0.008) 0.238 (0.013)
Tourism Monthly 0.088 0.095 0.101 0.091 0.085 0.086 (0.005) 0.103 (0.01) 0.083 (0.0)
Tourism Quarterly 0.099 0.098 0.070 0.061 0.070 0.068 (0.002) 0.083 (0.005) 0.072 (0.0)
Tourism Yearly 0.170 0.156 0.157 0.176 0.155 0.141 (0.016) 0.102 (0.006) 0.152 (0.0)
Vehicle Trips 0.112 0.100 0.115 0.120 0.103 0.090 (0.002) 0.099 (0.005) 0.087 (0.0)
Web Traffic Weekly 0.936 0.475 8·10^13 0.503 0.474 N/A 0.223 (0.011) 0.225 (0.0)

Table 11: Average run time of each method (in minutes).

Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluon
Car Parts 0.1 2.4 0.6 0.7 3.3 6.9 9.2 240.3 17.4
CIF 2016 0.1 0.4 0.5 0.6 1.3 4.1 6.2 240.2 16.7
COVID 0.1 1.4 0.5 0.7 2.3 7.9 8.8 240.4 29.3
Electricity Hourly 0.2 >360 21.6 >360 >360 10.4 19.5 240.4 61.2
Electricity Weekly 0.2 0.3 0.4 0.5 1.0 3.1 6.6 240.2 14.9
FRED-MD 0.1 2.4 0.7 0.6 3.4 6.8 5.5 240.2 16.8
Hospital 0.1 0.9 0.7 0.7 2.1 4.6 7.6 240.2 17.4
KDD Cup 2018 0.1 >360 16.3 22.8 >360 12.4 11.9 240.3 56.0
M1 Monthly 0.1 1.5 0.8 0.7 2.7 5.5 6.2 240.2 21.6
M1 Quarterly 0.1 0.3 0.5 0.7 1.3 5.9 5.4 240.2 15.6
M1 Yearly 0.1 0.3 0.4 0.4 0.9 4.2 5.2 240.2 12.9
M3 Monthly 0.1 4.0 1.0 0.8 5.8 5.1 5.9 240.3 24.2
M3 Other 0.1 0.3 0.4 0.4 0.9 5.0 6.0 240.2 13.6
M3 Quarterly 0.1 0.5 0.6 0.7 1.6 4.6 6.0 240.3 15.7
M3 Yearly 0.1 0.4 0.5 0.4 1.0 5.9 5.4 240.2 12.7
M4 Daily 0.2 28.5 33.0 25.3 82.3 6.8 8.4 240.3 68.7
M4 Hourly 0.1 84.9 1.8 0.8 89.5 9.2 10.9 240.2 51.2
M4 Monthly 0.3 296.0 37.6 7.7 340.3 4.9 7.9 242.0 112.1
M4 Quarterly 0.2 15.7 6.2 1.6 23.2 4.7 7.6 240.9 62.3
M4 Weekly 0.1 0.6 0.5 1.3 2.2 5.6 7.8 240.3 20.8
M4 Yearly 0.2 4.3 0.8 0.7 5.6 4.2 6.1 240.8 35.6
NN5 Daily 0.1 2.5 0.5 0.6 3.3 7.3 10.9 240.3 37.4
NN5 Weekly 0.1 0.3 0.4 0.4 1.0 3.6 6.4 240.2 13.7
Pedestrian Counts 0.1 >360 4.9 >360 >360 13.5 16.7 240.7 56.4
Tourism Monthly 0.1 10.2 0.8 0.7 13.1 4.4 7.6 240.2 26.0
Tourism Quarterly 0.1 0.9 0.6 0.7 1.8 3.6 6.3 240.2 14.6
Tourism Yearly 0.1 0.3 0.4 0.4 1.0 3.5 5.8 240.3 12.4
Vehicle Trips 0.1 1.1 0.6 0.7 2.2 5.1 7.3 240.2 16.0
Web Traffic Weekly 0.2 42.3 3.7 6.2 52.8 N/A 8.3 260.5 106.0
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting

Oleksandr Shchur¹, Caner Turkmen¹, Nick Erickson¹, Huibin Shen², Alexander Shirkov¹, Tony Hu¹, Yuyang Wang²
¹Amazon Web Services  ²AWS AI Labs

Abstract

We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic time series forecasting.¹ Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines conventional statistical models, machine-learning-based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.

1 Introduction

Time series (TS) forecasting is a fundamental statistical problem with applications in diverse domains such as inventory planning (Syntetos et al., 2009), smart grids (Hong et al., 2020), and epidemiology (Nikolopoulos et al., 2021). Decades of research have led to the development of various forecasting approaches, from simple statistical models (Hyndman and Athanasopoulos, 2018) to expressive deep-learning-based architectures (Benidis et al., 2022). Despite this variety of available approaches, practitioners often struggle to select the most appropriate method and to adhere to best practices when implementing and evaluating forecasting pipelines.

AutoML aims to mitigate these challenges by providing tools that enable practitioners to develop accurate and efficient predictive models without extensive domain knowledge.
While traditional AutoML methods have focused primarily on classification and regression tasks for tabular data (Thornton et al., 2013; Feurer et al., 2015; Olson and Moore, 2016; Erickson et al., 2020; LeDell and Poirier, 2020; Zimmer et al., 2021), automated time series forecasting has received comparatively less attention, with only a few open-source AutoML forecasting frameworks having been proposed (Deng et al., 2022; Catlin, 2022). Furthermore, existing automated forecasting frameworks tend to generate point forecasts without considering uncertainty, which is a crucial factor in many practical applications (Gneiting and Katzfuss, 2014).

To close this gap, we introduce AutoGluon–TimeSeries (AG–TS), an open-source AutoML framework for probabilistic time series forecasting written in Python. AG–TS can generate both point and probabilistic forecasts for collections of univariate time series. Together with support for static and time-varying covariates, this makes AG–TS applicable to most real-world forecasting tasks.

As part of the AutoGluon framework (Erickson et al., 2020; Shi et al., 2021), AG–TS adheres to the principles of ease of use and robustness, empowering users with limited expertise in the target domain to generate highly accurate predictions with minimal coding effort. The architecture is capable of handling failures of individual models when necessary, producing a valid result as long as any single model was trained successfully.

We evaluate the performance of AG–TS against other established forecasting methods and AutoML systems using 29 publicly available benchmark datasets. The results demonstrate AG–TS's strong performance, outperforming various competing approaches in terms of both point and probabilistic forecast accuracy.

¹ https://github.com/autogluon/autogluon

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. © 2023 the authors, released under CC BY 4.0.

Figure 1: Point forecast (left) and quantile forecast (right) for a univariate time series.
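The failure handling described in the introduction—a run succeeds as long as at least one model trains—can be sketched in a few lines. This is a simplified, hypothetical illustration, not AutoGluon's actual internals; the function name `fit_with_failure_handling` and the model objects are invented for this example.

```python
def fit_with_failure_handling(model_factories, train_data):
    """Fit each candidate model, skipping any that raise an exception.

    Returns a dict of the successfully trained models; a valid result is
    produced as long as at least one model trains without errors.
    """
    trained = {}
    for name, factory in model_factories.items():
        try:
            model = factory()
            model.fit(train_data)
            trained[name] = model
        except Exception:
            # A single model failure does not abort the whole run.
            continue
    if not trained:
        raise RuntimeError("all candidate models failed to train")
    return trained
```

The key design choice is that exceptions are contained per model, so one misbehaving forecaster cannot take down the entire training run.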
This highlights the potential of AG–TS as a valuable tool for practitioners and researchers seeking an automated and versatile solution for time series forecasting.

2 Probabilistic Time Series Forecasting

The probabilistic time series forecasting problem can be formally stated as follows. The data $\mathcal{D} = \{y_{i,1:T_i}\}_{i=1}^{N}$ is a collection of $N$ univariate time series, where $y_{i,1:T_i} = (y_{i,1}, \ldots, y_{i,T_i})$, $y_{i,t}$ is the value of the $i$-th time series at time $t$, and $T_i$ is the length of the $i$-th time series.² For example, $y_{i,t}$ may correspond to the number of units of product $i$ sold on day $t$. The goal of time series forecasting is to predict the future $H$ values of each time series in $\mathcal{D}$. The parameter $H$ is known as the prediction length or forecast horizon.

Each time series $y_{i,1:T}$ may additionally be associated with covariates $X_{i,1:T+H}$. These include both static covariates (e.g., location of the store, product ID) and time-varying covariates. The time-varying covariates may, in turn, be known in the future (e.g., day of the week, promotions) or only known in the past (e.g., weather, sales of other products).

In its most general form, the goal of probabilistic forecasting is to model the conditional distribution of the future time series values $y_{i,T+1:T+H}$ given the past values $y_{i,1:T}$ and the related covariates $X_{i,1:T+H}$:

$$p(y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H}).$$

In practice, we are rarely interested in the full predictive distribution and instead represent the range of possible outcomes with quantile forecasts $\hat{y}^{q}_{i,T+1:T+H}$ for chosen quantile levels $q \in (0, 1)$. The quantile forecast implies that the future time series value $y_{i,T+h}$ is predicted to exceed $\hat{y}^{q}_{i,T+h}$ with probability $1-q$ (Wen et al., 2017; Lim et al., 2021).

If the uncertainty is of no interest, we can instead report a point forecast of the future time series values. For example, we can summarize the prediction using the conditional mean

$$\hat{y}_{i,T+1:T+H} = \mathbb{E}_p\left[\, y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H} \,\right].$$

Figure 1 demonstrates the difference between a point forecast and a quantile forecast.
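To make the distinction concrete, the following pure-Python sketch (illustrative only, not AG–TS code; all names are invented for this example) summarizes Monte Carlo sample paths drawn from a predictive distribution into a mean (point) forecast and empirical quantile forecasts:

```python
import math

def empirical_quantile(sorted_vals, q):
    """Linear-interpolation empirical quantile of an already-sorted list."""
    pos = q * (len(sorted_vals) - 1)
    lo, hi = int(math.floor(pos)), int(math.ceil(pos))
    frac = pos - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def summarize_forecast(sample_paths, quantile_levels=(0.1, 0.5, 0.9)):
    """Turn sample paths (a list of length-H lists) into a mean forecast
    and per-level quantile forecasts, each of length H."""
    horizon = len(sample_paths[0])
    mean_forecast = []
    quantile_forecast = {q: [] for q in quantile_levels}
    for h in range(horizon):
        vals = sorted(path[h] for path in sample_paths)
        mean_forecast.append(sum(vals) / len(vals))
        for q in quantile_levels:
            quantile_forecast[q].append(empirical_quantile(vals, q))
    return mean_forecast, quantile_forecast
```

The mean summarizes the distribution with a single trajectory, while the 0.1/0.9 quantile trajectories bound the range of plausible outcomes, exactly as in Figure 1.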
Finally, note that here we consider the problem of forecasting multiple univariate time series, also known as panel data, which is different from multivariate forecasting (Benidis et al., 2022).

² To reduce clutter in notation, we assume that all time series have the same length $T$ (even though AG–TS supports the case when time series have different lengths).

3 AutoGluon–TimeSeries

AutoGluon–TimeSeries enables users to generate probabilistic time series forecasts in a few lines of code, as shown by the following minimal example.

```python
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

train_data = TimeSeriesDataFrame.from_path("train.csv")
predictor = TimeSeriesPredictor(prediction_length=30).fit(train_data)
predictions = predictor.predict(train_data)  # forecast next 30 time steps
```

Loading the data. A TimeSeriesDataFrame object stores a collection of univariate time series and provides utilities such as loading data from disk and train-test splitting. Internally, time series data is represented as a pandas.DataFrame (pandas development team, 2020) in long format (Table 1), but loaders are also available for other formats. Besides the target time series that need to be forecast, a TimeSeriesDataFrame can also store the static and time-varying covariates.

Table 1: Collection of univariate time series stored as a TimeSeriesDataFrame. Each row contains the unique ID of the time series, the timestamp, and the value of the target time series.

item_id  timestamp   target
T1       2020-03-02  23
T1       2020-03-03  43
...      ...         ...
T999     2020-08-29  15
T999     2020-08-31  27

Defining the task. Users can specify the forecasting task by creating a TimeSeriesPredictor object. The task definition includes information such as the prediction length, the list of quantile levels to be predicted, and the evaluation metric. The evaluation metric should be chosen based on the downstream application.
For example, mean weighted quantile loss (wQL) measures the accuracy of quantile forecasts, and mean absolute scaled error (MASE) reports the accuracy of the point forecast relative to a naive baseline. When creating the predictor, users can also specify which time-varying covariates are known in the future—the remainder will be treated as past-only covariates.

Fitting the predictor. Inside the fit() method, the predictor preprocesses the data, fits and evaluates various models using cross-validation, optionally performs hyperparameter optimization (HPO) on selected models, and trains an ensemble of the individual forecasting models. By default, AG–TS provides user-friendly presets that users can choose from to manage the training time–accuracy tradeoff. Advanced users can also explicitly specify the models to use and their hyperparameters, or specify search spaces in which optimal hyperparameters will be searched.

Making predictions. After the predictor has been fit, the predict() method can be used to generate predictions on new data—including time series that haven't been seen during training. Like the input data, the predictions are stored in a long-format data frame, where the columns contain the mean (expected value) and quantile forecasts at the desired quantile levels (Table 2).

Documentation. We provide various additional resources on the official website auto.gluon.ai. These include installation instructions, tutorials, and a cheatsheet summarizing the main features.

3.1 Design Considerations

AG–TS was launched as a part of the AutoGluon suite (Erickson et al., 2020) in v0.5, building on the foundation of AutoGluon and borrowing some design elements from other forecasting libraries like GluonTS (Alexandrov et al., 2020). Since then, AG–TS has evolved into a full solution for time series forecasting. Below, we highlight some of AG–TS's key design principles.

Table 2: Mean and quantile forecasts generated by a TimeSeriesPredictor. The forecasts include the next prediction_length time steps of each time series in the dataset.

item_id  timestamp   mean  0.1  0.5  0.9
T1       2020-09-01  17    10   16   23
T1       2020-09-02  25    15   23   31
...      ...         ...   ...  ...  ...
T999     2020-09-29  33    21   33   36
T999     2020-09-30  30    24   28   34

Ensembles over HPO. AG–TS follows the AutoGluon philosophy, relying on ensembling techniques instead of HPO or neural architecture search. The library features a broad selection of models whose probabilistic forecasts are combined in an ensemble selection step (Caruana et al., 2004). AG–TS favors broadening the portfolio of forecasters over exploring the hyperparameter space of any particular model. While AG–TS does support HPO techniques, HPO is excluded from most preset configurations to reduce training time and minimize overfitting on the validation data.

Presets and default hyperparameters. In order to provide defaults that work well out of the box for users who are not familiar with forecasting, AG–TS includes various presets—high-level configuration options that allow users to trade off between fast training and higher accuracy. AG–TS follows the convention-over-configuration principle: all models feature default hyperparameter configurations that are expected to work well given the selected preset. At the same time, advanced users have the option to manually configure individual models and use the TimeSeriesPredictor as a unified API for training, evaluating, and combining various forecasting models (see the documentation for details).

Model selection. Time series forecasting introduces unique challenges in model validation and selection. Importantly, as the main aim of the model is to generalize into the future, special care has to be taken to define validation sets that are held out across time. The AG–TS API is designed with this consideration in mind. If the user does not explicitly specify a validation set, the library holds out the last prediction_length time steps of each time series as a validation set.
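A minimal sketch of this default hold-out split follows; the function name and the plain-dict data layout are assumptions made for illustration, not the library's implementation:

```python
def train_validation_split(series_by_id, prediction_length):
    """Hold out the last `prediction_length` steps of each series for validation.

    `series_by_id` maps item_id -> list of target values. The validation
    window is held out across time, mirroring the default split described
    above (illustrative sketch, not AutoGluon code).
    """
    train, validation = {}, {}
    for item_id, values in series_by_id.items():
        if len(values) <= prediction_length:
            raise ValueError(f"series {item_id!r} is too short to split")
        train[item_id] = values[:-prediction_length]
        validation[item_id] = values[-prediction_length:]
    return train, validation
```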
Optionally, multiple windows can be used to perform so-called backtesting.

3.2 Forecasting Models

There are two families of approaches to forecasting in large panels of time series. The first approach is to fit local classical parametric statistical models to each individual time series. The second approach is built on expressive machine-learning-based models that are fit globally on all time series at once. AG–TS features both approaches, incorporating forecasting models from both families and combining them in an ensemble.

Local models. This category contains conventional methods that capture simple patterns like trend and seasonality. Examples include ARIMA (Box et al., 1970), Theta (Assimakopoulos and Nikolopoulos, 2000), and ETS (Hyndman et al., 2008), as well as simple baselines like Seasonal Naive (Hyndman and Athanasopoulos, 2018). AG–TS relies on the implementations of these models provided by StatsForecast (Garza et al., 2022).

The defining characteristic of local models is that a separate model is fit to each individual time series in the dataset (Januschowski et al., 2020). This means that local models need to be re-fit when making predictions for new time series not seen during training. To mitigate this limitation, AG–TS caches the model predictions and parallelizes their fitting across CPU cores using Joblib (Joblib Development Team, 2020).

Global models. Unlike local models, a single global model is fitted to the entire dataset and used to make predictions for all time series. Global models used by AG–TS can be subdivided into two categories: deep learning and tabular models. Deep-learning models such as DeepAR (Salinas et al., 2020), PatchTST (Nie et al., 2023), and the Temporal Fusion Transformer (Lim et al., 2021) use neural networks to generate probabilistic forecasts for future data. AG–TS uses the PyTorch-based deep learning models from GluonTS (Alexandrov et al., 2020).
Tabular models like LightGBM (Ke et al., 2017) operate by first converting the time series forecasting task into a tabular regression problem. This can be done either recursively—by predicting future time series values one at a time—or by directly forecasting all future values simultaneously (Januschowski et al., 2022). AG–TS relies on the regression models provided by AutoGluon–Tabular and uses MLForecast (Nixtla, 2023) to convert them into tabular forecasters.

Global models typically provide faster inference compared to local models, since there is no need for re-training at prediction time. This, however, comes at the cost of longer training times, since more parameters need to be estimated. Global models also naturally handle various types of covariates and utilize information present across different time series, which is known as cross-learning (Semenoglou et al., 2021).

Ensembling. After AG–TS finishes sequentially fitting the individual models, they are combined using 100 steps of the forward selection algorithm (Caruana et al., 2004). The output of the ensemble is a convex combination of the model predictions:

$$\hat{y}^{\text{ensemble}}_{i,T+1:T+H} = \sum_{m=1}^{M} w_m \cdot \hat{y}^{(m)}_{i,T+1:T+H} \quad \text{subject to} \quad w_m \ge 0, \;\; \sum_{m=1}^{M} w_m = 1,$$

where $\hat{y}^{(m)}_{i,T+1:T+H}$ are either point or quantile forecasts generated by each of the $M$ trained models. Note that in the case of probabilistic forecasting, the ensemble computes a weighted average of the quantile forecasts of the individual models—a method known as Vincentization (Ratcliff, 1979).

The ensemble weights $w_m$ are tuned to optimize the chosen evaluation metric (e.g., wQL, MASE) on the out-of-fold predictions generated using time series cross-validation (Hyndman and Athanasopoulos, 2018). The main advantages of the forward selection algorithm are its simplicity, compatibility with arbitrary evaluation metrics, and the sparsity of the final ensemble.

4 Related Work

Time series forecasting is a challenging task, and the idea of automated forecasting has long intrigued statistics and ML researchers.
An early influential work on automated forecasting was the R package forecast (Hyndman and Khandakar, 2008), which introduced the AutoETS and AutoARIMA models. These models automatically tune their parameters (e.g., trend, seasonality) for each individual time series using an in-sample information criterion.

The following decade saw a growing focus on deep learning models for time series (Benidis et al., 2022; Wen et al., 2017; Salinas et al., 2020; Lim et al., 2021; Oreshkin et al., 2020). Several works have explored how such neural-network-based models can be combined with AutoML techniques to generate automated forecasting solutions (Van Kuppevelt et al., 2020; Shah et al., 2021; Javeri et al., 2021). Another line of research focused on optimizing the entire forecasting pipeline—including data preprocessing and feature engineering—not just hyperparameter tuning for individual models (Dahl, 2020; Kurian et al., 2021; da Silva et al., 2022). A recent survey by Meisenbacher et al. (2022) provides an overview of such automated pipelines.

Even though AutoML for forecasting is becoming an active research topic, few of the recent developments have found their way from academic papers to software packages. Available open-source AutoML forecasting libraries include AutoPyTorch–Forecasting (Deng et al., 2022), AutoTS (Catlin, 2022), and PyCaret (Ali, 2020). In contrast to these frameworks, AG–TS supports probabilistic forecasting and focuses on ease of use, allowing users to generate forecasts in a few lines of code.

5 Experiments

5.1 Setup

The goal of our experiments is to evaluate the point and probabilistic forecast accuracy of AG–TS. As baselines, we use various statistical and ML-based forecasting methods.

Baseline methods. AutoARIMA, AutoETS, and AutoTheta are established statistical forecasting models that automatically tune model parameters for each time series individually based on an information criterion (Hyndman et al., 2008).
This means such models do not require a validation set and use in-sample statistics for model tuning. StatEnsemble is defined by taking the median of the predictions of the three statistical models. Such statistical ensembles, despite their simplicity, have been shown to achieve competitive results in forecasting competitions (Makridakis et al., 2018). We use the Python implementations of all these methods provided by the StatsForecast library (Garza et al., 2022). We additionally use Seasonal Naive as a sanity-check baseline that all other methods are compared against (Hyndman and Athanasopoulos, 2018).

For ML-based methods, we include two established deep learning forecasting models, DeepAR (Salinas et al., 2020) and the Temporal Fusion Transformer (TFT) (Lim et al., 2021). We use the PyTorch implementations of these models provided by GluonTS (Alexandrov et al., 2020). Finally, we add the AutoML forecasting framework AutoPyTorch–Forecasting (Deng et al., 2022) to our comparison. AutoPyTorch builds deep learning forecasting models by combining neural architecture search (e.g., by trying various encoder modules) and hyperparameter optimization (e.g., by tuning the learning rate). The search process is powered by a combination of Bayesian and multi-fidelity optimization. Similar to AutoGluon, the models are combined using ensemble selection (Caruana et al., 2004).

Datasets. In our evaluation we use 29 publicly available forecasting benchmark datasets provided via GluonTS. These include datasets from the Monash Forecasting Repository (Godahewa et al., 2021), such as the M1, M3, and M4 competition data (Makridakis and Hibon, 2000; Makridakis et al., 2018). We selected the datasets from the Monash Repository that contain more than a single time series and fewer than 15M total time steps.
Our selection of datasets covers various scenarios that can be encountered in practice—from small datasets (M1 and M3) to datasets with a few long time series (Electricity, Pedestrian Counts) and large collections of medium-sized time series (M4). A comprehensive list of dataset statistics is provided in Table 8 in the appendix.

Configuration. We train the TimeSeriesPredictor from AG–TS with the best_quality presets, as these are designed to produce the most accurate forecasts, and set the time_limit to 4 hours. Note that the presets were fixed a priori and not optimized using the benchmark datasets. DeepAR and TFT are also trained for up to 4 hours, with early stopping on the validation loss and patience set to 200. For these models, the checkpoint achieving the best validation loss is used to generate the test predictions. The time limit for AutoPyTorch is similarly set to 4 hours. We set no time limit for the remaining statistical models, as they do not support such functionality. In case the runtime of a single experiment exceeds 6 hours, the job is interrupted and the result is marked as a failure. More details about the configuration are available in Appendix A.3.

All models are trained using AWS m6i.4xlarge cloud instances (16 vCPU cores, 64 GB RAM). We use CPU instances to fairly evaluate the CPU-only baselines, though AG–TS additionally supports GPU training. Each run is repeated 5 times using different random seeds for non-deterministic models. We run all experiments using AutoMLBenchmark (Gijsbers et al., 2022). In the supplement, we provide full configuration details and the scripts for reproducing all experiments.

5.2 Forecasting Accuracy

We measure the accuracy of the point forecasts by reporting the mean absolute scaled error (MASE) of all forecasting methods on all benchmark datasets. AG–TS and AutoPyTorch are trained to optimize the MASE metric, while all other models are trained using their normal training procedure. We report the aggregate statistics in Table 3 and provide the full results for individual models and datasets in Table 9 in the appendix.

Table 3: Point forecast accuracy comparison of baseline methods with AutoGluon (based on the MASE metric) on 29 datasets. Listed are the number of datasets where each method produced: lower error than AutoGluon (Wins), higher error (Losses), error within 0.001 (Ties), an error during prediction (Failures), or the lowest error among all methods (Champion). Average rank and average error are computed using the datasets where no method failed. We rescale the errors for each dataset between [0, 1] to ensure that averaging is meaningful. The final column reports the win rate versus the Seasonal Naive baseline. Individual results are given in Table 9.

Framework            Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (MASE)     -     -       -     0         19        2.08          0.073                   100.0%
StatEnsemble         6     20      0     3         3         3.12          0.238                   82.8%
AutoPyTorch (MASE)   4     25      0     0         2         4.12          0.257                   93.1%
AutoETS              4     25      0     0         1         4.64          0.374                   75.9%
AutoTheta            4     23      0     2         0         4.92          0.427                   72.4%
DeepAR               4     24      0     1         2         5.08          0.434                   93.1%
AutoARIMA            4     22      0     3         1         5.92          0.612                   79.3%
TFT                  2     27      0     0         1         6.12          0.635                   75.9%

Table 4: Probabilistic forecast accuracy comparison of each baseline method with AutoGluon (based on the wQL metric) on 29 datasets. The columns are defined as in Table 3. Results for individual models and datasets are given in Table 10.

Framework         Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (wQL)   -     -       -     0         19        1.80          0.086                   100.0%
StatEnsemble      3     23      0     3         0         3.36          0.330                   86.2%
DeepAR            5     23      0     1         1         4.08          0.455                   89.7%
TFT               5     24      0     0         5         4.24          0.487                   89.7%
AutoETS           3     26      0     0         2         4.40          0.489                   69.0%
AutoTheta         2     25      0     2         1         5.00          0.545                   69.0%
AutoARIMA         4     22      0     3         1         5.12          0.641                   82.8%

We measure the accuracy of the probabilistic (quantile) forecasts by reporting the mean weighted quantile loss (wQL) averaged over the 9 quantile levels $q \in \{0.1, 0.2, \ldots, 0.9\}$. AG–TS is configured to optimize the wQL metric.
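For reference, common formulations of the two headline metrics can be written in a few lines of pure Python. The function names are illustrative, and the exact scaling and normalization conventions used in the benchmark may differ from this sketch.

```python
def mase(y_train, y_test, y_pred, season=1):
    """Mean absolute scaled error: forecast MAE divided by the in-sample MAE
    of the seasonal-naive one-step forecast (one common formulation)."""
    naive_errors = [abs(y_train[t] - y_train[t - season])
                    for t in range(season, len(y_train))]
    scale = sum(naive_errors) / len(naive_errors)
    return sum(abs(a - f) for a, f in zip(y_test, y_pred)) / len(y_test) / scale

def weighted_quantile_loss(y_test, quantile_preds):
    """Mean weighted quantile loss over the given quantile levels: for each
    level q, twice the pinball loss normalized by the sum of |y| (a common
    convention), averaged across levels."""
    denom = sum(abs(y) for y in y_test)
    per_level = []
    for q, preds in quantile_preds.items():
        pinball = sum(max(q * (y - p), (q - 1) * (y - p))
                      for y, p in zip(y_test, preds))
        per_level.append(2 * pinball / denom)
    return sum(per_level) / len(per_level)
```

MASE below 1 means the forecast beats the in-sample seasonal-naive baseline; wQL rewards quantile forecasts that bracket the observed values at the requested coverage levels.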
We exclude AutoPyTorch from this comparison since this framework does not support probabilistic forecasting. We report the aggregate statistics in Table 4 and provide the full results for individual models and datasets in Table 10 in the appendix.

Some of the frameworks failed to generate forecasts on certain datasets. AutoARIMA, AutoTheta, and StatEnsemble did not finish training on some datasets (Electricity Hourly, KDD Cup 2018, and Pedestrian Counts) within 6 hours. This is caused by the poor scaling of these models to very long time series. The DeepAR model fails on one dataset (Web Traffic Weekly) due to numerical errors encountered during training.

Discussion. The results demonstrate that AG–TS outperforms all other frameworks, achieving the best average rank and rescaled error for both point and probabilistic forecasts, and even beating the best-in-hindsight competing method on 19 out of 29 datasets.

StatEnsemble places second after AG–TS. The statistical ensemble performs especially well on small datasets such as M1 and M3. This demonstrates that in the low-data regime, simple approaches like ensembling by taking the median may perform better than the learned ensemble selection strategy employed by both AutoML frameworks.

Figure 2: Total runtime of each framework across all datasets. AutoGluon always completes training and prediction under the time limit and achieves a mean runtime of 33 minutes. AutoPyTorch is always trained for the full 4-hour time limit. Statistical models train faster in most cases, but may take an extremely long time to train on datasets with long time series. The runtimes for individual models and datasets are provided in Table 11.

AutoPyTorch achieves similar performance to StatEnsemble in point forecasting across most performance indicators. Interestingly, AG–TS tends to outperform AutoPyTorch on larger datasets like M4. This means that AG–TS's strategy of training various lightweight models performs well in this setting under the limited time budget.
Also note that configuring AutoPyTorch requires more code and domain knowledge, compared to the 3 lines of code necessary to reproduce the above results with AG–TS.

The deep learning models DeepAR and TFT perform well in terms of probabilistic forecasting, but fall behind simple statistical approaches in point forecasts. This makes sense, since the objective functions optimized by these deep learning models are designed for probabilistic forecasting.

5.3 Runtime Comparison

High accuracy is not the only important property of an AutoML system—the ability to generate predictions in a reasonable amount of time is often necessary in practice. To evaluate the efficiency of AG–TS, we compare its runtime with the other frameworks. We visualize the runtime of each framework across all datasets in Figure 2. Note that here we compare the total runtime, defined as the sum of training and prediction times. This reflects the typical forecasting workflow in practice, where the forecast is generated once for each time series. Moreover, it is hard to distinguish between training and prediction time for local models, where a new model is trained for each new time series.

AG–TS completes training and prediction under the 4-hour time limit on all 29 datasets and achieves a mean runtime of 33 minutes. While statistical models are faster on average, they can be extremely slow to train on datasets consisting of long time series. For instance, the runtimes of AutoARIMA, AutoTheta, and StatEnsemble exceed 6 hours on 3 datasets with long time series. The deep learning models DeepAR and TFT have a higher median runtime compared to the statistical models, but never reach the 4-hour time limit due to early stopping. Finally, AutoPyTorch always consumes the entire 4-hour time budget due to its design.

To summarize, AG–TS is able to produce accurate forecasts under mild time budgets.
While, on average, AG–TS takes more time than the individual models, it produces more accurate forecasts and avoids the extremely long runtimes sometimes exhibited by local models. The results also demonstrate that limited training time is better spent training and ensembling many diverse models (as done by AG–TS), rather than hyperparameter tuning a restricted set of models (as done by AutoPyTorch).

5.4 Ablations

Finally, we perform ablations to understand the effect of different components on the final performance. We compare the point forecast accuracy of the TimeSeriesPredictor trained for 4 hours with the MASE evaluation metric (Section 5.2) against several variations with certain components disabled. First, we exclude some base models from the presets: statistical models (NoStatModels), deep learning models (NoDeepModels), and tabular models (NoTabularModels). We also consider reducing the time limit to 1 hour (AutoGluon-1h) or 10 minutes (AutoGluon-10m), as well as disabling the final ensembling step (NoEnsemble). In the latter case, AG–TS predicts using the model with the best validation score. The rest of the setup is identical to Section 5.2.

Table 5: Ablation study. We compare the point forecast accuracy of AutoGluon, where certain component models are removed, ensembling is disabled, or the time limit is reduced. All versions except AutoGluon-1h and AutoGluon-10m are trained for 4 hours. The columns are defined and the scores are computed as in Table 3.

Framework         Champion  Average rank  Average rescaled error
AutoGluon-1h      19        2.04          0.070
AutoGluon-4h      19        2.08          0.073
NoStatModels      16        2.12          0.094
NoTabularModels   15        2.12          0.085
NoDeepModels      15        2.28          0.124
AutoGluon-10m     14        2.50          0.099
NoEnsemble        7         3.52          0.177

Table 5 shows the metrics for the different model variations, each compared to the baselines from Section 5.2. AutoGluon-4h and AutoGluon-1h produce nearly identical results.
This is not surprising, as the 4-hour version finishes training under 1 hour for most datasets (Figure 2). Interestingly, AutoGluon achieves strong results even with a 10-minute time limit, achieving the best average rank and outperforming the best-in-hindsight model on 14 out of 29 datasets.

Removing the ensembling step has the most detrimental effect on the overall accuracy. This highlights the importance of ensembling, confirming the findings of other works (Makridakis et al., 2018; Borchert et al., 2022). The ablations also show that all 3 classes of models used by AutoGluon are important for the overall performance, with deep learning models being the most critical component.

6 Future Work

Our experiments demonstrate the strong forecasting accuracy achieved by AG–TS. Despite these encouraging initial results, we aim to continue developing the library, adding new functionality and further boosting the forecasting performance. This includes incorporating the various ideas in the space of AutoML for forecasting (Meisenbacher et al., 2022), with focus on the following directions.

Ensembling. Advanced ensembling strategies, such as stacking (Ting and Witten, 1997), lie at the core of modern high-performing AutoML systems (Erickson et al., 2020). How to best generalize these techniques to probabilistic forecasting is an active, but still open research question (Gastinger et al., 2021; Wang et al., 2022).

Calibration. Many practical tasks require guarantees on the uncertainty estimates associated with the forecasts. Conformal prediction methods (Stankeviciute et al., 2021; Xu and Xie, 2021) provide one way to obtain such guarantees, and we plan to incorporate them into AG–TS in the future.

New problem types. AG–TS supports the most common types of forecasting tasks, such as probabilistic forecasting or handling covariates. However, there are several settings that are currently (as of v0.8) not supported.
These include so-called cold-start forecasting (where little historic data is available) and generating forecast explanations (Rojat et al., 2021). Another interesting potential application for AG–TS is assisting judgemental forecasting. In this context, AG–TS could serve as a "tool" queried by a large language model (LLM) (Schick et al., 2023) to generate qualitative forecasts. More generally, combinations of LLMs with AutoML frameworks are an exciting direction for future work (Tornede et al., 2023).

Scalability. In our experiments we consider datasets with up to ≈10^7 time steps across all time series. Modern applications, however, sometimes require operating on even larger scales. This would require improving the efficiency of existing models and developing new efficient AutoML techniques.

7 Conclusions

In this work, we introduced AutoGluon–TimeSeries, a powerful and user-friendly open-source AutoML library for probabilistic time series forecasting. By combining statistical models and deep learning forecasting approaches with ensembling techniques, AutoGluon–TimeSeries is able to achieve strong empirical results on a range of benchmark datasets. With the ability to generate accurate point and quantile forecasts with just 3 lines of Python code, this framework is poised to make time series forecasting more accessible and efficient for a wide range of users.

8 Broader Impact Statement

AutoGluon–TimeSeries enables users to generate accurate forecasts in a few lines of code. This democratizes machine learning, lowering the barrier to entry to forecasting for non-experts. At the same time, AutoGluon–TimeSeries can be used by experienced users to design highly accurate forecasting pipelines. More accurate forecasts can directly translate to real-world impact in various domains.
For example, forecasting renewable energy generation is a crucial component of smart grid management (Tripathy and Prusty, 2021); accurately predicting demand leads to more efficient inventory management and increased revenue (Makridakis et al., 2022).

The potential negative impacts of the proposed approach are similar to those of other forecasting models. One such danger arises when the limitations of forecasting methods are not taken into account in the context of decision making (e.g., when guiding policy decisions). As forecasting models only capture statistical dependencies, they may be misleading when trying to estimate effects of actions or interventions.

9 Submission Checklist

1. For all authors...

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] All claims are supported by the experimental evaluation in Section 5.
(b) Did you describe the limitations of your work? [Yes] See Section 6.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 8.
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] The paper conforms to the guidelines.

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [N/A] The paper contains no theoretical results.
(b) Did you include complete proofs of all theoretical results? [N/A] The paper contains no theoretical results.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a url)?
[Yes] All of the above included in the supplementary material.
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] Results are provided in CSV format.
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [No] We provide the raw data and describe the procedure in the paper, which should make reproducing the results and figures straightforward.
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is properly documented and we made sure that it can be executed in a fresh environment.
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] We use the standard evaluation protocol: For all datasets, the last prediction_length time steps of each time series are held out and used to evaluate the forecasts produced by each method. For hyperparameters, see Section A.3.
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We carefully made sure that this is the case.
(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 5.4.
(h) Did you use the same evaluation protocol for the methods being compared? [Yes] All methods use an identical evaluation protocol.
(i) Did you compare performance over time? [Yes] We allocate the same runtime budget of 4 hours to all methods. An ablation study is performed where the time limit is reduced to 1 hour and 10 minutes for AutoGluon.
(j) Did you perform multiple runs of your experiments and report random seeds?
[Yes] For all non-deterministic methods, the experiments are repeated with five random seeds: 1, 2, 3, 4, 5.
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Error metrics produced by all non-deterministic methods include the mean and the standard deviation (see Tables 9 and 10).
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [No] These are not available for probabilistic time series forecasting.
(m) Did you include the total amount of compute and the type of resources used (e.g., type of gpus, internal cluster, or cloud provider)? [Yes] The compute infrastructure is described in Section 5.1. The total runtime of all experiments equals approximately 6000 hours (≈ # models × # seeds × # of datasets).
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g. in a NAS approach; and also hyperparameters of your own method)? [Yes] We describe the hyperparameter settings in Appendix A.3, in addition to providing the code that can be used to reproduce the results.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] References for all used datasets and methods are provided in Section 5.1.
(b) Did you mention the license of the assets? [Yes] This paper does not introduce any new public assets. The AutoGluon library is released under the Apache 2.0 License.
(c) Did you include any new assets either in the supplemental material or as a url? [No] This paper does not introduce any new public assets.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The evaluation was performed using public benchmark datasets.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
[N/A] The evaluation was performed using public benchmark datasets.

5. If you used crowdsourcing or conducted research with human subjects...

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.
(b) Did you describe any potential participant risks, with links to Institutional Review Board (irb) approvals, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not use crowdsourcing or conduct research with human subjects.

References

Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D. C., Rangapuram, S., Salinas, D., Schulz, J., et al. (2020). GluonTS: Probabilistic and neural time series modeling in Python. The Journal of Machine Learning Research, 21(1):4629–4634.

Ali, M. (2020). PyCaret: An open source, low-code machine learning library in Python. https://www.pycaret.org.

Assimakopoulos, V. and Nikolopoulos, K. (2000). The Theta model: A decomposition approach to forecasting. International Journal of Forecasting, 16(4):521–530.

Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, Y., Maddix, D., Turkmen, C., Gasthaus, J., Bohlke-Schneider, M., Salinas, D., Stella, L., et al. (2022). Deep learning for time series forecasting: Tutorial and literature survey. ACM Computing Surveys, 55(6):1–36.

Borchert, O., Salinas, D., Flunkert, V., Januschowski, T., and Günnemann, S. (2022). Multi-objective model selection for time series forecasting. arXiv preprint arXiv:2202.08485.

Box, G. E., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M. (1970). Time series analysis: forecasting and control. John Wiley & Sons.

Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from libraries of models.
In Proceedings of the twenty-first international conference on Machine learning, page 18.

Catlin, C. (2022). AutoTS: Automated time series forecasting. https://github.com/winedarksea/AutoTS.

da Silva, F. R., Vieira, A. B., Bernardino, H. S., Alencar, V. A., Pessamilio, L. R., and Barbosa, H. J. C. (2022). Automated machine learning for time series prediction. In 2022 IEEE Congress on Evolutionary Computation (CEC), pages 1–7. IEEE.

Dahl, S. M. J. (2020). TSPO: an autoML approach to time series forecasting. PhD thesis.

Deng, D., Karl, F., Hutter, F., Bischl, B., and Lindauer, M. (2022). Efficient automated deep learning for time series forecasting. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part III, pages 664–680. Springer.

Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and accurate AutoML for structured data. arXiv preprint arXiv:2003.06505.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning. Advances in Neural Information Processing Systems, 28.

Garza, F., Mergenthaler Canseco, M., Challu, C., and Olivares, K. G. (2022). StatsForecast: Lightning fast forecasting with statistical and econometric models. https://github.com/Nixtla/statsforecast (v1.15.0).

Gastinger, J., Nicolas, S., Stepić, D., Schmidt, M., and Schülke, A. (2021). A study on ensemble learning for time series forecasting and the need for meta-learning. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.

Gijsbers, P., Bueno, M. L., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML benchmark. arXiv preprint arXiv:2207.12560.

Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting.
Annual Review of Statistics and Its Application, 1:125–151.

Godahewa, R., Bergmeir, C., Webb, G. I., Hyndman, R. J., and Montero-Manso, P. (2021). Monash time series forecasting archive. In Neural Information Processing Systems Track on Datasets and Benchmarks.

Hong, T., Pinson, P., Wang, Y., Weron, R., Yang, D., and Zareipour, H. (2020). Energy forecasting: A review and outlook. IEEE Open Access Journal of Power and Energy, 7:376–388.

Hyndman, R., Koehler, A. B., Ord, J. K., and Snyder, R. D. (2008). Forecasting with exponential smoothing: the state space approach. Springer Science & Business Media.

Hyndman, R. J. and Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts.

Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: the forecast package for R. Journal of Statistical Software, 27:1–22.

Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., and Callot, L. (2020). Criteria for classifying forecasting methods. International Journal of Forecasting, 36(1):167–177.

Januschowski, T., Wang, Y., Torkkola, K., Erkkilä, T., Hasson, H., and Gasthaus, J. (2022). Forecasting with trees. International Journal of Forecasting, 38(4):1473–1481.

Javeri, I. Y., Toutiaee, M., Arpinar, I. B., Miller, J. A., and Miller, T. W. (2021). Improving neural networks for time-series forecasting using data augmentation and AutoML. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 1–8. IEEE.

Joblib Development Team (2020). Joblib: Running Python functions as pipeline jobs. https://joblib.readthedocs.io/ (v1.2.0).

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30.

Kurian, J. J., Dix, M., Amihai, I., Ceusters, G., and Prabhune, A. (2021).
BOAT: A Bayesian optimization autoML time-series framework for industrial applications. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 17–24. IEEE.

LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable automatic machine learning. In Proceedings of the AutoML Workshop at ICML, volume 2020.

Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748–1764.

Makridakis, S. and Hibon, M. (2000). The M3 competition: Results, conclusions and implications. International Journal of Forecasting, 16(4):451–476.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 competition: Results, findings, conclusion and way forward. International Journal of Forecasting, 34(4):802–808.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2022). The M5 competition: Background, organization, and implementation. International Journal of Forecasting, 38(4):1325–1336.

Meisenbacher, S., Turowski, M., Phipps, K., Rätz, M., Müller, D., Hagenmeyer, V., and Mikut, R. (2022). Review of automated time series forecasting pipelines. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(6):e1475.

Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. (2023). A time series is worth 64 words: Long-term forecasting with transformers. International Conference on Learning Representations.

Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C., and Vasilakis, C. (2021). Forecasting and planning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmental decisions. European Journal of Operational Research, 290(1):99–115.

Nixtla (2023). MLForecast: Scalable machine learning for time series forecasting. v0.7.2.

Olson, R. S. and Moore, J. H. (2016). TPOT: A tree-based pipeline optimization tool for automating machine learning.
In Workshop on automatic machine learning, pages 66–74. PMLR.

Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y. (2020). N-BEATS: Neural basis expansion analysis for interpretable time series forecasting.

pandas development team (2020). pandas-dev/pandas: Pandas. https://doi.org/10.5281/zenodo.3509134 (v1.5.3).

Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics. Psychological Bulletin, 86(3):446.

Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., and Díaz-Rodríguez, N. (2021). Explainable artificial intelligence (XAI) on timeseries data: A survey. arXiv preprint arXiv:2104.00950.

Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. (2020). DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191.

Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., and Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.

Semenoglou, A.-A., Spiliotis, E., Makridakis, S., and Assimakopoulos, V. (2021). Investigating the accuracy of cross-learning time series forecasting methods. International Journal of Forecasting, 37(3):1072–1084.

Shah, S. Y., Patel, D., Vu, L., Dang, X.-H., Chen, B., Kirchner, P., Samulowitz, H., Wood, D., Bramble, G., Gifford, W. M., et al. (2021). AutoAI-TS: AutoAI for time series forecasting. In Proceedings of the 2021 International Conference on Management of Data, pages 2584–2596.

Shi, X., Mueller, J., Erickson, N., Li, M., and Smola, A. (2021). Multimodal AutoML on structured tables with text fields. In 8th ICML Workshop on Automated Machine Learning (AutoML).

Stankeviciute, K., M Alaa, A., and van der Schaar, M. (2021). Conformal time-series forecasting. Advances in Neural Information Processing Systems, 34:6216–6228.

Syntetos, A. A., Boylan, J. E., and Disney, S. M. (2009).
Forecasting for inventory planning: a 50-year review. Journal of the Operational Research Society, 60:S149–S160.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 847–855.

Ting, K. M. and Witten, I. H. (1997). Stacking bagged and dagged models.

Tornede, A., Deng, D., Eimer, T., Giovanelli, J., Mohan, A., Ruhkopf, T., Segel, S., Theodorakopoulos, D., Tornede, T., Wachsmuth, H., et al. (2023). AutoML in the age of large language models: Current challenges, future opportunities and risks. arXiv preprint arXiv:2306.08107.

Tripathy, D. S. and Prusty, B. R. (2021). Forecasting of renewable generation for applications in smart grid power systems. In Advances in Smart Grid Power System, pages 265–298. Elsevier.

Van Kuppevelt, D., Meijer, C., Huber, F., van der Ploeg, A., Georgievska, S., and van Hees, V. T. (2020). Mcfly: Automated deep learning on time series. SoftwareX, 12:100548.

Wang, X., Hyndman, R. J., Li, F., and Kang, Y. (2022). Forecast combinations: an over 50-year review. International Journal of Forecasting.

Wen, R., Torkkola, K., Narayanaswamy, B., and Madeka, D. (2017). A multi-horizon quantile recurrent forecaster. arXiv preprint arXiv:1711.11053.

Xu, C. and Xie, Y. (2021). Conformal prediction interval for dynamic time-series. In International Conference on Machine Learning, pages 11559–11569. PMLR.

Zimmer, L., Lindauer, M., and Hutter, F. (2021). Auto-PyTorch: Multi-fidelity metalearning for efficient and robust AutoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9):3079–3090.

A Supplementary Materials

A.1 Evaluation Metrics

MASE.
Mean absolute scaled error is the standard metric for evaluating the accuracy of point forecasts:

\mathrm{MASE} = \frac{1}{N}\sum_{i=1}^{N} \frac{\frac{1}{H}\sum_{h=1}^{H} \left|y_{i,T+h} - \hat{y}_{i,T+h}\right|}{\frac{1}{T-s}\sum_{t=1}^{T-s} \left|y_{i,t+s} - y_{i,t}\right|}

MASE is scale-invariant and does not suffer from the limitations of other metrics, such as being undefined when the target time series equals zero (Hyndman and Athanasopoulos, 2018). We compute the metric using the median (0.5 quantile) forecast produced by each model.

wQL. Weighted quantile loss for a single quantile level q is defined as

\mathrm{wQL}[q] = 2\,\frac{\sum_{i=1}^{N}\sum_{h=1}^{H}\left[q \cdot \max(y_{i,T+h} - \hat{y}^{q}_{i,T+h},\, 0) + (1-q) \cdot \max(\hat{y}^{q}_{i,T+h} - y_{i,T+h},\, 0)\right]}{\sum_{i=1}^{N}\sum_{h=1}^{H} \left|y_{i,T+h}\right|}

In our experiments, we report the mean wQL averaged over 9 quantile levels Q = \{0.1, 0.2, \ldots, 0.9\}:

\mathrm{wQL} = \frac{1}{|Q|}\sum_{q \in Q} \mathrm{wQL}[q]

A.2 Reproducibility

We ran all experiments using AutoMLBenchmark (Gijsbers et al., 2022). We provide a fork of AMLB that includes all scripts necessary to reproduce the results from our paper in the following GitHub repository: https://github.com/shchur/automlbenchmark/tree/autogluon-timeseries-automl23/autogluon_timeseries_automl23.

A.3 Model Configuration

We trained the baseline models DeepAR, TFT, AutoARIMA, AutoETS, AutoTheta with the default hyperparameter configurations provided by the respective libraries. For DeepAR and TFT, the last prediction_length time steps of each time series were reserved as a validation set. Both models were trained for the full duration of 4 hours, saving the parameters and evaluating the validation loss at each epoch. The parameters achieving the lowest validation loss were then used for prediction. No HPO was performed for these two models, as AutoPyTorch already trains similar deep learning models with HPO.

For AutoPyTorch, we used the reference implementation by the authors.^3 We set the target metric to "mean_MASE_forecasting", budget_type="epochs", min_budget=5, max_budget=50, and resampling_strategy=HoldoutValTypes.time_series_hold_out_validation.
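To make the metric definitions in Section A.1 concrete, here is a minimal pure-Python sketch for a single time series (the N = 1 case of the formulas); the toy data and function names are illustrative, not the evaluation code used in the paper:

```python
# Stdlib sketch of MASE and wQL (Section A.1) for a single series.
# y_hist is the in-sample history that supplies the seasonal-naive
# scaling term in the MASE denominator; s is the seasonality.

def mase(y_hist, y_true, y_pred, s):
    H = len(y_true)
    num = sum(abs(a - f) for a, f in zip(y_true, y_pred)) / H
    den = sum(abs(y_hist[t + s] - y_hist[t])
              for t in range(len(y_hist) - s)) / (len(y_hist) - s)
    return num / den

def wql(y_true, quantile_preds):
    """quantile_preds: dict mapping quantile level q -> forecasts."""
    denom = sum(abs(a) for a in y_true)
    losses = []
    for q, preds in quantile_preds.items():
        pinball = sum(q * max(a - f, 0.0) + (1 - q) * max(f - a, 0.0)
                      for a, f in zip(y_true, preds))
        losses.append(2 * pinball / denom)
    return sum(losses) / len(losses)  # mean over quantile levels

# Toy example: a seasonal series with seasonality s = 2.
y_hist = [1.0, 2.0, 1.0, 3.0, 1.0, 2.0]
y_true = [1.0, 2.0]
assert mase(y_hist, y_true, [1.0, 2.0], s=2) == 0.0  # perfect forecast
```

Note that a perfect median forecast yields MASE = 0, while a forecast whose mean absolute error equals the mean in-sample seasonal-naive error yields MASE = 1.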
We also set torch_num_threads to 16 (the number of vCPU cores).

In our experiments, we used AG–TS v0.8.2, the latest release at the time of publication. We used the "best_quality" presets and set eval_metric to either "MASE" or "mean_wQuantileLoss", depending on the experiment. All other parameters of the TimeSeriesPredictor were set to their default values. The "best_quality" presets include the following models: AutoETS, AutoARIMA, Theta (from StatsForecast), DeepAR, PatchTST, TFT (from GluonTS), DirectTabular, RecursiveTabular (wrappers around AutoGluon–Tabular and MLForecast), plus the baseline methods Naive and SeasonalNaive. The non-default hyperparameters of the individual models used by the best_quality presets are provided in Table 6.

The guiding principle for developing the presets for AG–TS can be summarized as "keep defaults whenever possible, except the cases where the defaults are clearly suboptimal". For example, we set allowmean=True for AutoARIMA to allow this model to handle time series with non-zero mean. For deep learning models, we increase the batch size from 32 to 64 since larger batch sizes typically lead to faster convergence for all deep learning models. The context_length is capped at a minimum value because the default setting context_length=prediction_length can result in models that ignore most of the history if prediction_length is very short. For PatchTST, we set the context_length to the value used in the respective publication (Nie et al., 2023).

The versions of frameworks used in our experiments are listed in Table 7.

^3 https://github.com/dengdifan/Auto-PyTorch/blob/ecml22_apt_ts/examples/APT-TS/APT_task.py

Table 6: Non-default hyperparameters that AutoGluon sets for the underlying models. The remaining parameters are all set to their defaults in the respective libraries.
Models not listed here (Naive, SeasonalNaive, AutoETS, DirectTabular, Theta) have all their hyperparameters set to the default values.

Model             Hyperparameter           Value
AutoARIMA         allowmean                True
                  approximation            True
DeepAR            batch_size               64
                  context_length           max(10, 2 * prediction_length)
                  num_samples              250
PatchTST          batch_size               64
                  context_length           96
TFT               batch_size               64
                  context_length           max(64, 2 * prediction_length)
RecursiveTabular  tabular_hyperparameters  {"GBM", "NN_TORCH"}

Table 7: Versions of the frameworks used during evaluation.

Framework      Version
AutoGluon      0.8.2
AutoPyTorch    0.2.1
GluonTS        0.13.2
MLForecast     0.7.3
StatsForecast  1.5.0
Python         3.9
PyTorch        1.13.1+cpu

Table 8: Statistics of the benchmark datasets used in our experimental evaluation. Frequency is represented by pandas offset aliases. Seasonality depends on the frequency, and is used to configure statistical models and compute the MASE metric.

Dataset             # series  # time steps  Prediction length  Frequency  Seasonality
Car Parts           2,674     104,286       12                 M          12
CIF 2016            72        6,244         12                 M          12
COVID               266       48,412        30                 D          7
Electricity Hourly  321       8,428,176     48                 H          24
Electricity Weekly  321       47,508        8                  W          1
FRED-MD             107       76,612        12                 M          12
Hospital            767       55,224        12                 M          12
KDD Cup 2018        270       2,929,404     48                 H          24
M1 Monthly          617       44,892       18                 M          12
M1 Quarterly        203       8,320         8                  Q          4
M1 Yearly           181       3,429         6                  Y          1
M3 Monthly          1,428     141,858       18                 M          12
M3 Other            174       11,933        8                  Q          1
M3 Quarterly        756       30,956        8                  Q          4
M3 Yearly           645       14,449        6                  Y          1
M4 Daily            4,227     9,964,658     14                 D          7
M4 Hourly           414       353,500       48                 H          24
M4 Monthly          48,000    10,382,411    18                 M          12
M4 Quarterly        24,000    2,214,108     8                  Q          4
M4 Weekly           359       366,912       13                 W          1
M4 Yearly           22,974    707,265       6                  Y          1
NN5 Daily           111       81,585        56                 D          7
NN5 Weekly          111       11,655        8                  W          1
Pedestrian Counts   66        3,129,178     48                 H          24
Tourism Monthly     366       100,496       24                 M          12
Tourism Quarterly   427       39,128        8                  Q          4
Tourism Yearly      518       10,685        4                  Y          1
Vehicle Trips       262       45,253        7                  D          7
Web Traffic Weekly  145,063   15,376,678    8                  W          1

Table 9: Point forecast accuracy, as measured by MASE (lower is better).
For non-deterministic methods(DeepAR, TFT, AutoPyTorch, AutoGluon) we report the mean and standard deviation of thescores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generatea forecast in 6 hours. "N/A" denotes model failure.SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluonCar Parts 1.127 1.118 1.133 1.208 1.052 0.749 (0.001) 0.751 (0.002) 0.746 (0.0) 0.747 (0.0)CIF 2016 1.289 1.069 0.898 1.006 0.945 1.278 (0.088) 1.372 (0.085) 1.023 (0.069) 1.073 (0.006)COVID 8.977 6.029 5.907 7.719 5.884 7.166 (0.334) 5.192 (0.211) 4.911 (0.086) 5.805 (0.0)Electricity Hourly 1.405 d.n.f. 1.465 d.n.f. d.n.f. 1.251 (0.006) 1.389 (0.025) 1.420 (0.123) 1.227 (0.003)Electricity Weekly 3.037 3.009 3.076 3.113 3.077 2.447 (0.211) 2.861 (0.122) 2.322 (0.277) 1.892 (0.0)FRED-MD 1.101 0.478 0.505 0.564 0.498 0.634 (0.038) 0.901 (0.086) 0.682 (0.058) 0.656 (0.0)Hospital 0.921 0.820 0.766 0.764 0.753 0.771 (0.008) 0.814 (0.012) 0.770 (0.003) 0.741 (0.001)KDD Cup 2018 0.975 d.n.f. 0.988 1.010 d.n.f. 
0.841 (0.036) 0.844 (0.065) 0.764 (0.047) 0.709 (0.026)M1 Monthly 1.314 1.152 1.083 1.092 1.045 1.117 (0.029) 1.534 (0.063) 1.278 (0.115) 1.235 (0.001)M1 Quarterly 2.078 1.770 1.665 1.667 1.622 1.742 (0.028) 2.099 (0.108) 1.813 (0.056) 1.615 (0.0)M1 Yearly 4.894 3.870 3.950 3.659 3.769 3.674 (0.161) 4.318 (0.122) 3.407 (0.078) 3.371 (0.007)M3 Monthly 1.146 0.934 0.867 0.855 0.845 0.960 (0.017) 1.062 (0.04) 0.956 (0.083) 0.822 (0.0)M3 Other 3.089 2.245 1.801 2.009 1.769 2.061 (0.182) 1.926 (0.028) 1.871 (0.024) 1.837 (0.004)M3 Quarterly 1.425 1.419 1.121 1.119 1.096 1.198 (0.037) 1.176 (0.036) 1.180 (0.032) 1.057 (0.002)M3 Yearly 3.172 3.159 2.695 2.608 2.627 2.694 (0.096) 2.818 (0.019) 2.691 (0.026) 2.520 (0.002)M4 Daily 1.452 1.153 1.228 1.149 1.145 1.145 (0.026) 1.176 (0.018) 1.152 (0.009) 1.156 (0.0)M4 Hourly 1.193 1.029 1.609 2.456 1.157 1.484 (0.151) 3.391 (0.442) 1.345 (0.404) 0.807 (0.001)M4 Monthly 1.079 0.812 0.803 0.834 0.780 0.933 (0.01) 0.947 (0.005) 0.851 (0.025) 0.782 (0.0)M4 Quarterly 1.602 1.276 1.167 1.183 1.148 1.367 (0.171) 1.277 (0.015) 1.176 (0.022) 1.139 (0.0)M4 Weekly 2.777 2.355 2.548 2.608 2.375 2.418 (0.026) 2.625 (0.038) 2.369 (0.177) 2.035 (0.001)M4 Yearly 3.966 3.720 3.077 3.085 3.032 3.858 (0.694) 3.220 (0.097) 3.093 (0.041) 3.019 (0.001)NN5 Daily 1.011 0.935 0.870 0.878 0.859 0.812 (0.01) 0.789 (0.004) 0.807 (0.021) 0.761 (0.004)NN5 Weekly 1.063 0.998 0.980 0.963 0.977 0.915 (0.085) 0.884 (0.012) 0.865 (0.025) 0.860 (0.0)Pedestrian Counts 0.369 d.n.f. 0.553 d.n.f. d.n.f. 
0.309 (0.005) 0.373 (0.01) 0.354 (0.024) 0.312 (0.009)
Tourism Monthly 1.631 1.585 1.529 1.666 1.469 1.461 (0.025) 1.719 (0.08) 1.495 (0.009) 1.442 (0.0)
Tourism Quarterly 1.699 1.655 1.578 1.648 1.539 1.599 (0.062) 1.830 (0.047) 1.647 (0.034) 1.537 (0.002)
Tourism Yearly 3.552 4.044 3.183 2.992 3.231 3.476 (0.165) 2.916 (0.197) 3.004 (0.053) 2.946 (0.007)
Vehicle Trips 1.302 1.427 1.301 1.284 1.203 1.162 (0.016) 1.227 (0.02) 1.162 (0.019) 1.113 (0.0)
Web Traffic Weekly 1.066 1.189 1.207 1.108 1.068 N/A 0.973 (0.022) 0.962 (0.01) 0.938 (0.0)

Table 10: Probabilistic forecast accuracy, as measured by wQL (lower is better). For non-deterministic methods (DeepAR, TFT, AutoGluon) we report the mean and standard deviation of the scores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generate a forecast in 6 hours. "N/A" denotes model failure.

Dataset  SeasonalNaive  AutoARIMA  AutoETS  AutoTheta  StatEnsemble  DeepAR  TFT  AutoGluon
Car Parts 1.717 1.589 1.338 1.367 1.324 0.963 (0.009) 0.878 (0.004) 0.923 (0.0)
CIF 2016 0.031 0.017 0.039 0.027 0.028 0.114 (0.024) 0.010 (0.002) 0.019 (0.0)
COVID 0.140 0.030 0.046 0.094 0.046 0.072 (0.02) 0.031 (0.003) 0.030 (0.0)
Electricity Hourly 0.108 d.n.f. 0.100 d.n.f. d.n.f. 0.081 (0.002) 0.097 (0.001) 0.076 (0.0)
Electricity Weekly 0.141 0.138 0.144 0.146 0.141 0.123 (0.041) 0.118 (0.011) 0.088 (0.0)
FRED-MD 0.104 0.056 0.050 0.057 0.054 0.054 (0.021) 0.114 (0.011) 0.056 (0.0)
Hospital 0.062 0.058 0.053 0.055 0.053 0.053 (0.001) 0.054 (0.001) 0.051 (0.0)
KDD Cup 2018 0.489 d.n.f. 0.550 0.553 d.n.f. 0.363 (0.014) 0.488 (0.054) 0.323 (0.014)
M1 Monthly 0.153 0.146 0.163 0.159 0.152 0.136 (0.008) 0.224 (0.016) 0.135 (0.0)
M1 Quarterly 0.119 0.088 0.081 0.082 0.083 0.084 (0.003) 0.093 (0.006) 0.090 (0.0)
M1 Yearly 0.184 0.160 0.139 0.137 0.142 0.142 (0.029) 0.127 (0.004) 0.134 (0.001)
M3 Monthly 0.124 0.102 0.093 0.095 0.092 0.098 (0.001) 0.109 (0.003) 0.089 (0.0)
M3 Other 0.047 0.035 0.032 0.035 0.031 0.036 (0.002) 0.033 (0.001) 0.031 (0.0)
M3 Quarterly 0.083 0.079 0.069 0.070 0.068 0.073 (0.001) 0.071 (0.001) 0.065 (0.0)
M3 Yearly 0.141 0.162 0.129 0.128 0.128 0.117 (0.002) 0.133 (0.001) 0.114 (0.0)
M4 Daily 0.030 0.023 0.025 0.023 0.023 0.023 (0.0) 0.023 (0.0) 0.022 (0.0)
M4 Hourly 0.039 0.036 0.070 0.041 0.037 0.065 (0.03) 0.038 (0.002) 0.030 (0.001)
M4 Monthly 0.109 0.085 0.085 0.088 0.082 0.092 (0.003) 0.089 (0.001) 0.081 (0.0)
M4 Quarterly 0.099 0.082 0.079 0.079 0.076 0.084 (0.005) 0.083 (0.001) 0.075 (0.0)
M4 Weekly 0.073 0.050 0.052 0.053 0.050 0.046 (0.001) 0.049 (0.001) 0.041 (0.0)
M4 Yearly 0.138 0.130 0.111 0.115 0.109 0.124 (0.006) 0.116 (0.004) 0.104 (0.0)
NN5 Daily 0.292 0.169 0.162 0.188 0.164 0.148 (0.002) 0.145 (0.001) 0.140 (0.0)
NN5 Weekly 0.142 0.090 0.088 0.090 0.089 0.084 (0.007) 0.085 (0.001) 0.078 (0.0)
Pedestrian Counts 0.675 d.n.f. 0.764 d.n.f. d.n.f. 0.230 (0.006) 0.261 (0.008) 0.238 (0.013)
Tourism Monthly 0.088 0.095 0.101 0.091 0.085 0.086 (0.005) 0.103 (0.01) 0.083 (0.0)
Tourism Quarterly 0.099 0.098 0.070 0.061 0.070 0.068 (0.002) 0.083 (0.005) 0.072 (0.0)
Tourism Yearly 0.170 0.156 0.157 0.176 0.155 0.141 (0.016) 0.102 (0.006) 0.152 (0.0)
Vehicle Trips 0.112 0.100 0.115 0.120 0.103 0.090 (0.002) 0.099 (0.005) 0.087 (0.0)
Web Traffic Weekly 0.936 0.475 8·10^13 0.503 0.474 N/A 0.223 (0.011) 0.225 (0.0)

Table 11: Average run time of each method (in minutes).

Dataset  SeasonalNaive  AutoARIMA  AutoETS  AutoTheta  StatEnsemble  DeepAR  TFT  AutoPyTorch  AutoGluon
Car Parts 0.1 2.4 0.6 0.7 3.3 6.9 9.2 240.3 17.4
CIF 2016 0.1 0.4 0.5 0.6 1.3 4.1 6.2 240.2 16.7
COVID 0.1 1.4 0.5 0.7 2.3 7.9 8.8 240.4 29.3
Electricity Hourly 0.2 >360 21.6 >360 >360 10.4 19.5 240.4 61.2
Electricity Weekly 0.2 0.3 0.4 0.5 1.0 3.1 6.6 240.2 14.9
FRED-MD 0.1 2.4 0.7 0.6 3.4 6.8 5.5 240.2 16.8
Hospital 0.1 0.9 0.7 0.7 2.1 4.6 7.6 240.2 17.4
KDD Cup 2018 0.1 >360 16.3 22.8 >360 12.4 11.9 240.3 56.0
M1 Monthly 0.1 1.5 0.8 0.7 2.7 5.5 6.2 240.2 21.6
M1 Quarterly 0.1 0.3 0.5 0.7 1.3 5.9 5.4 240.2 15.6
M1 Yearly 0.1 0.3 0.4 0.4 0.9 4.2 5.2 240.2 12.9
M3 Monthly 0.1 4.0 1.0 0.8 5.8 5.1 5.9 240.3 24.2
M3 Other 0.1 0.3 0.4 0.4 0.9 5.0 6.0 240.2 13.6
M3 Quarterly 0.1 0.5 0.6 0.7 1.6 4.6 6.0 240.3 15.7
M3 Yearly 0.1 0.4 0.5 0.4 1.0 5.9 5.4 240.2 12.7
M4 Daily 0.2 28.5 33.0 25.3 82.3 6.8 8.4 240.3 68.7
M4 Hourly 0.1 84.9 1.8 0.8 89.5 9.2 10.9 240.2 51.2
M4 Monthly 0.3 296.0 37.6 7.7 340.3 4.9 7.9 242.0 112.1
M4 Quarterly 0.2 15.7 6.2 1.6 23.2 4.7 7.6 240.9 62.3
M4 Weekly 0.1 0.6 0.5 1.3 2.2 5.6 7.8 240.3 20.8
M4 Yearly 0.2 4.3 0.8 0.7 5.6 4.2 6.1 240.8 35.6
NN5 Daily 0.1 2.5 0.5 0.6 3.3 7.3 10.9 240.3 37.4
NN5 Weekly 0.1 0.3 0.4 0.4 1.0 3.6 6.4 240.2 13.7
Pedestrian Counts 0.1 >360 4.9 >360 >360 13.5 16.7 240.7 56.4
Tourism Monthly 0.1 10.2 0.8 0.7 13.1 4.4 7.6 240.2 26.0
Tourism Quarterly 0.1 0.9 0.6 0.7 1.8 3.6 6.3 240.2 14.6
Tourism Yearly 0.1 0.3 0.4 0.4 1.0 3.5 5.8 240.3 12.4
Vehicle Trips 0.1 1.1 0.6 0.7 2.2 5.1 7.3 240.2 16.0
Web Traffic Weekly 0.2 42.3 3.7 6.2 52.8 N/A 8.3 260.5 106.0
JA29SJ3Ma8
XHIY3cQ8Tew
automl.cc/AutoML/2023/ABCD_Track
2023
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting
["Oleksandr Shchur", "Ali Caner Turkmen", "Nick Erickson", "Huibin Shen", "Alexander Shirkov", "Tony Hu", "Bernie Wang"]
We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic time series forecasting. Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines both conventional statistical models, machine-learning based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.
["AutoML", "forecasting", "time series", "probabilistic forecasting"]
AutoGluon–TimeSeries: AutoML for Probabilistic Time Series Forecasting

Oleksandr Shchur (1), Caner Turkmen (1), Nick Erickson (1), Huibin Shen (2), Alexander Shirkov (1), Tony Hu (1), Yuyang Wang (2)
(1) Amazon Web Services  (2) AWS AI Labs

Abstract

We introduce AutoGluon–TimeSeries, an open-source AutoML library for probabilistic time series forecasting.[1] Focused on ease of use and robustness, AutoGluon–TimeSeries enables users to generate accurate point and quantile forecasts with just 3 lines of Python code. Built on the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles of diverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines conventional statistical models, machine-learning-based forecasting approaches, and ensembling techniques. In our evaluation on 29 benchmark datasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforming a range of forecasting methods in terms of both point and quantile forecast accuracy, and often even improving upon the best-in-hindsight combination of prior methods.

1 Introduction

Time series (TS) forecasting is a fundamental statistical problem with applications in diverse domains such as inventory planning (Syntetos et al., 2009), smart grids (Hong et al., 2020), and epidemiology (Nikolopoulos et al., 2021). Decades of research led to the development of various forecasting approaches, from simple statistical models (Hyndman and Athanasopoulos, 2018) to expressive deep-learning-based architectures (Benidis et al., 2022). Despite the availability of various forecasting approaches, practitioners often struggle with selecting the most appropriate method and adhering to best practices when implementing and evaluating forecasting pipelines.

AutoML aims to mitigate these challenges by providing tools that enable practitioners to develop accurate and efficient predictive models without extensive domain knowledge.
While traditional AutoML methods have focused primarily on classification and regression tasks for tabular data (Thornton et al., 2013; Feurer et al., 2015; Olson and Moore, 2016; Erickson et al., 2020; LeDell and Poirier, 2020; Zimmer et al., 2021), automated time series forecasting has received comparatively less attention, with only a few open-source AutoML forecasting frameworks having been proposed (Deng et al., 2022; Catlin, 2022). Furthermore, existing automated forecasting frameworks tend to generate point forecasts without considering uncertainty, which is a crucial factor in many practical applications (Gneiting and Katzfuss, 2014).

To close this gap, we introduce AutoGluon–TimeSeries (AG–TS), an open-source AutoML framework for probabilistic time series forecasting written in Python. AG–TS can generate both point and probabilistic forecasts for collections of univariate time series. Together with support for static and time-varying covariates, this makes AG–TS applicable to most real-world forecasting tasks.

As part of the AutoGluon framework (Erickson et al., 2020; Shi et al., 2021), AG–TS adheres to the principles of ease of use and robustness, empowering users with limited expertise in the target domain to generate highly accurate predictions with minimal coding effort. The architecture is capable of handling failures of individual models when necessary, producing a valid result as long as any single model was trained successfully.

[1] https://github.com/autogluon/autogluon

AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track. ©2023 the authors, released under CC BY 4.0.

Figure 1: Point forecast (left) and quantile forecast (right) for a univariate time series.

We evaluate the performance of AG–TS against other established forecasting methods and AutoML systems using 29 publicly available benchmark datasets. The results demonstrate AG–TS's strong performance, outperforming various competing approaches in terms of both point and probabilistic forecast accuracy.
This highlights the potential of AG–TS as a valuable tool for practitioners and researchers seeking an automated and versatile solution for time series forecasting.

2 Probabilistic Time Series Forecasting

The probabilistic time series forecasting problem can be formally stated as follows. The data $D = \{y_{i,1:T_i}\}_{i=1}^{N}$ is a collection of $N$ univariate time series, where $y_{i,1:T_i} = (y_{i,1}, \dots, y_{i,T_i})$, $y_{i,t}$ is the value of the $i$-th time series at time $t$, and $T_i$ is the length of the $i$-th time series.[2] For example, $y_{i,t}$ may correspond to the number of units of product $i$ sold on day $t$. The goal of time series forecasting is to predict the future $H$ values for each time series in $D$. The parameter $H$ is known as prediction length or forecast horizon.

Each time series $y_{i,1:T}$ may additionally be associated with covariates $X_{i,1:T+H}$. These include both static covariates (e.g., location of the store, product ID) and time-varying covariates. The time-varying covariates may, in turn, be known in the future (e.g., day of the week, promotions) or only known in the past (e.g., weather, sales of other products).

In the most general form, the goal of probabilistic forecasting is to model the conditional distribution of the future time series values $y_{i,T+1:T+H}$ given the past values $y_{i,1:T}$ and the related covariates $X_{i,1:T+H}$:

$p(y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H}).$

In practice, we are rarely interested in the full predictive distribution and rather represent the range of possible outcomes with quantile forecasts $\hat{y}^{q}_{i,T+1:T+H}$ for chosen quantile levels $q \in (0, 1)$. The quantile forecast implies that the future time series value $y_{i,T+h}$ is predicted to exceed $\hat{y}^{q}_{i,T+h}$ with probability $q$ (Wen et al., 2017; Lim et al., 2021).

If the uncertainty is of no interest, we can instead report a point forecast of the future time series values. For example, we can summarize the prediction using the conditional mean

$\hat{y}_{i,T+1:T+H} = \mathbb{E}_p[y_{i,T+1:T+H} \mid y_{i,1:T}, X_{i,1:T+H}].$

Figure 1 demonstrates the difference between a point forecast and a quantile forecast.
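The distinction can be made concrete with a short sketch in plain Python. The predictive distribution, the value 20, and the 1000-sample size below are synthetic illustrative assumptions, not taken from the paper: given sample paths drawn from a model's predictive distribution of one future value, the sample mean yields a point forecast, while empirical quantiles yield a quantile forecast.

```python
import random

# Hypothetical predictive distribution for a single future time step,
# approximated by 1000 synthetic sample paths (illustrative values only).
random.seed(0)
samples = [20 + random.gauss(0, 5) for _ in range(1000)]

# Point forecast: the conditional mean, estimated by the sample average.
point_forecast = sum(samples) / len(samples)

def empirical_quantile(xs, q):
    """Empirical q-quantile of a list of samples (simple order-statistic estimate)."""
    xs = sorted(xs)
    return xs[min(int(q * len(xs)), len(xs) - 1)]

# Quantile forecast at levels 0.1, 0.5, 0.9 summarizes the range of outcomes.
quantile_forecast = {q: empirical_quantile(samples, q) for q in (0.1, 0.5, 0.9)}
```

The quantile forecast widens as the predictive uncertainty grows, whereas the point forecast collapses the whole distribution to a single number.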
Finally, note that here we consider the problem of forecasting multiple univariate time series, also known as panel data, which is different from multivariate forecasting (Benidis et al., 2022).

[2] To reduce clutter in notation, we assume that all time series have the same length $T$ (even though AG–TS supports the case when time series have different lengths).

3 AutoGluon–TimeSeries

AutoGluon–TimeSeries enables users to generate probabilistic time series forecasts in a few lines of code, as shown by the following minimal example.

from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

train_data = TimeSeriesDataFrame.from_path("train.csv")
predictor = TimeSeriesPredictor(prediction_length=30).fit(train_data)
predictions = predictor.predict(train_data)  # forecast next 30 time steps

Loading the data. A TimeSeriesDataFrame object stores a collection of univariate time series and provides utilities such as loading data from disk and train-test splitting. Internally, time series data is represented as a pandas.DataFrame (pandas development team, 2020) in long format (Table 1), but loaders are also available for other formats. Besides the target time series that need to be forecast, TimeSeriesDataFrame can also store the static and time-varying covariates.

Table 1: Collection of univariate time series stored as a TimeSeriesDataFrame. Each row contains the unique ID of the time series, timestamp, and the value of the target time series.

item_id  timestamp   target
T1       2020-03-02  23
T1       2020-03-03  43
...      ...         ...
T999     2020-08-29  15
T999     2020-08-31  27

Defining the task. Users can specify the forecasting task by creating a TimeSeriesPredictor object. The task definition includes information such as the prediction length, the list of quantile levels to be predicted, and the evaluation metric. The evaluation metric should be chosen based on the downstream application.
For example, mean weighted quantile loss (wQL) measures the accuracy of quantile forecasts, and mean absolute scaled error (MASE) reports the accuracy of the point forecast relative to a naive baseline. When creating the predictor, users can also specify which time-varying covariates are known in the future—the remainder will be treated as past-only covariates.

Fitting the predictor. Inside the fit() method, the predictor preprocesses the data, fits and evaluates various models using cross-validation, optionally performs hyperparameter optimization (HPO) on selected models, and trains an ensemble of the individual forecasting models. By default, AG–TS provides user-friendly presets users can choose from to manage the training time–accuracy tradeoff. Advanced users can also explicitly specify the models to use and their hyperparameters, or specify search spaces in which optimal hyperparameters will be searched.

Making predictions. After the predictor has been fit, the predict() method can be used to generate predictions on new data—including time series that haven't been seen during training. Like the input data, the predictions are stored in a long-format data frame, where the columns contain the mean (expected value) and quantile forecasts at the desired quantile levels (Table 2).

Documentation. We provide various additional resources on the official website auto.gluon.ai. These include installation instructions, tutorials, and a cheatsheet summarizing the main features.

3.1 Design Considerations

AG–TS was launched as a part of the AutoGluon suite (Erickson et al., 2020) in v0.5, building on the foundation of AutoGluon and borrowing some design elements from other forecasting libraries like GluonTS (Alexandrov et al., 2020). Since then, AG–TS has evolved into a full solution for time series forecasting. Below, we highlight some of AG–TS's key design principles.

Table 2: Mean and quantile forecasts generated by a TimeSeriesPredictor.
The forecasts include the next prediction_length many time steps of each time series in the dataset.

item_id  timestamp   mean  0.1  0.5  0.9
T1       2020-09-01  17    10   16   23
T1       2020-09-02  25    15   23   31
...      ...         ...   ...  ...  ...
T999     2020-09-29  33    21   33   36
T999     2020-09-30  30    24   28   34

Ensembles over HPO. AG–TS follows the AutoGluon philosophy, relying on ensembling techniques instead of HPO or neural architecture search. The library features a broad selection of models whose probabilistic forecasts are combined in an ensemble selection step (Caruana et al., 2004). AG–TS favors broadening the portfolio of forecasters over exploring the hyperparameter space of any particular model. While AG–TS does support HPO techniques, HPO is excluded from most preset configurations to reduce training time and minimize overfitting on the validation data.

Presets and default hyperparameters. In order to provide defaults that work well out of the box for users who are not familiar with forecasting, AG–TS includes various presets: high-level configuration options that allow users to trade off between fast training and higher accuracy. AG–TS follows the convention-over-configuration principle: all models feature default configurations of hyperparameters that are expected to work well given the selected preset. At the same time, advanced users have the option to manually configure individual models and use the TimeSeriesPredictor as a unified API for training, evaluating and combining various forecasting models (see documentation for details).

Model selection. Time series forecasting introduces unique challenges in model validation and selection. Importantly, as the main aim of the model is to generalize into the future, special care has to be taken to define validation sets that are held out across time. The AG–TS API is designed with this consideration. If the user does not explicitly specify a validation set, the library holds out the window with the last prediction_length time steps of each time series as a validation set.
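The default hold-out just described can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not AutoGluon's internal implementation; the dictionary-based data layout and the toy values are assumptions made for brevity.

```python
def train_validation_split(series_by_id, prediction_length):
    """Hold out the last `prediction_length` time steps of each series
    as a temporal validation window (simplified sketch)."""
    train, validation = {}, {}
    for item_id, values in series_by_id.items():
        train[item_id] = values[:-prediction_length]
        validation[item_id] = values[-prediction_length:]
    return train, validation

# Toy data: two short series keyed by item_id.
data = {"T1": [23, 43, 31, 27, 35, 30], "T2": [15, 27, 22, 19, 24, 28]}
train, val = train_validation_split(data, prediction_length=2)
# train["T1"] -> [23, 43, 31, 27]; val["T1"] -> [35, 30]
```

Models are then scored on how well their forecasts of the held-out window match the validation values; repeating the split over several shifted windows gives multi-window backtesting.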
Optionally, multiple windows can be used to perform so-called backtesting.

3.2 Forecasting Models

There are two families of approaches to forecasting in large panels of time series. The first approach is to fit local classical parametric statistical models to each individual time series. A second approach is built on expressive machine-learning-based approaches that are fit globally on all time series at once. AG–TS features both approaches, incorporating forecasting models from both families and combining them in an ensemble.

Local models. This category contains conventional methods that capture simple patterns like trend and seasonality. Examples include ARIMA (Box et al., 1970), Theta (Assimakopoulos and Nikolopoulos, 2000) and ETS (Hyndman et al., 2008), as well as simple baselines like Seasonal Naive (Hyndman and Athanasopoulos, 2018). AG–TS relies on implementations of these provided by StatsForecast (Garza et al., 2022).

The defining characteristic of local models is that a separate model is fit to each individual time series in the dataset (Januschowski et al., 2020). This means that local models need to be re-fit when making predictions for new time series not seen during training. To mitigate this limitation, AG–TS caches the model predictions and parallelizes their fitting across CPU cores using Joblib (Joblib Development Team, 2020).

Global models. Unlike local models, a single global model is fitted to the entire dataset and used to make predictions for all time series. Global models used by AG–TS can be subdivided into two categories: deep learning and tabular models. Deep-learning models such as DeepAR (Salinas et al., 2020), PatchTST (Nie et al., 2023), and Temporal Fusion Transformer (Lim et al., 2021) use neural networks to generate probabilistic forecasts for future data. AG–TS uses PyTorch-based deep learning models from GluonTS (Alexandrov et al., 2020).
Tabular models like LightGBM (Ke et al., 2017) operate by first converting the time series forecasting task into a tabular regression problem. This can be done either recursively—by predicting future time series values one at a time—or by directly forecasting all future values simultaneously (Januschowski et al., 2022). AG–TS relies on regression models provided by AutoGluon–Tabular and uses MLForecast (Nixtla, 2023) for converting them into tabular forecasters.

Global models typically provide faster inference compared to local models, since there is no need for re-training at prediction time. This, however, comes at the cost of longer training times since more parameters need to be estimated. Global models also naturally handle various types of covariates and utilize information present across different time series, which is known as cross-learning (Semenoglou et al., 2021).

Ensembling. After AG–TS finishes sequentially fitting the individual models, they are combined using 100 steps of the forward selection algorithm (Caruana et al., 2004). The output of the ensemble is a convex combination of the model predictions:

$\hat{y}^{\text{ensemble}}_{i,T+1:T+H} = \sum_{m=1}^{M} w_m \cdot \hat{y}^{(m)}_{i,T+1:T+H} \quad \text{subject to} \quad w_m \geq 0, \ \sum_{m=1}^{M} w_m = 1,$

where $\hat{y}^{(m)}_{i,T+1:T+H}$ are either point or quantile forecasts generated by each of the $M$ trained models. Note that in the case of probabilistic forecasting, the ensemble computes a weighted average of the quantile forecasts of the individual models, a method known as Vincentization (Ratcliff, 1979).

The ensemble weights $w_m$ are tuned to optimize the chosen evaluation metric (e.g., wQL, MASE) on the out-of-fold predictions generated using time series cross-validation (Hyndman and Athanasopoulos, 2018). The main advantages of the forward selection algorithm are its simplicity, compatibility with arbitrary evaluation metrics, and the sparsity of the final ensemble.

4 Related work

Time series forecasting is a challenging task, and the idea of automated forecasting has long intrigued statistics and ML researchers.
An early influential work on automated forecasting was the R package forecast (Hyndman and Khandakar, 2008) that introduced the AutoETS and AutoARIMA models. These models automatically tune their parameters (e.g., trend, seasonality) for each individual time series using an in-sample information criterion.

The following decade saw the growing focus on deep learning models for time series (Benidis et al., 2022; Wen et al., 2017; Salinas et al., 2020; Lim et al., 2021; Oreshkin et al., 2020). Several works have explored how such neural-network-based models can be combined with AutoML techniques to generate automated forecasting solutions (Van Kuppevelt et al., 2020; Shah et al., 2021; Javeri et al., 2021). Another line of research focused on optimizing the entire forecasting pipeline—including data preprocessing and feature engineering—not just hyperparameter tuning for individual models (Dahl, 2020; Kurian et al., 2021; da Silva et al., 2022). A recent survey by Meisenbacher et al. (2022) provides an overview of such automated pipelines.

Even though AutoML for forecasting is becoming an active research topic, few of the recent developments have found their way from academic papers to software packages. Available open-source AutoML forecasting libraries include AutoPyTorch–Forecasting (Deng et al., 2022), AutoTS (Catlin, 2022) and PyCaret (Ali, 2020). In contrast to these frameworks, AG–TS supports probabilistic forecasting and focuses on ease of use, allowing users to generate forecasts in a few lines of code.

5 Experiments

5.1 Setup

The goal of our experiments is to evaluate the point and probabilistic forecast accuracy of AG–TS. As baselines, we use various statistical and ML-based forecasting methods.

Baseline methods. AutoARIMA, AutoETS, and AutoTheta are established statistical forecasting models that automatically tune model parameters for each time series individually based on an information criterion (Hyndman et al., 2008).
This means such models do not require a validation set and use in-sample statistics for model tuning. StatEnsemble is defined by taking the median of the predictions of the three statistical models. Such statistical ensembles, despite their simplicity, have been shown to achieve competitive results in forecasting competitions (Makridakis et al., 2018). We use Python implementations of all these methods provided by the StatsForecast library (Garza et al., 2022). We additionally use Seasonal Naive as a sanity-check baseline that all other methods are compared against (Hyndman and Athanasopoulos, 2018).

For ML-based methods, we include two established deep learning forecasting models, DeepAR (Salinas et al., 2020) and Temporal Fusion Transformer (TFT) (Lim et al., 2021). We use the PyTorch implementations of these models provided by GluonTS (Alexandrov et al., 2020). Finally, we add the AutoML forecasting framework AutoPyTorch–Forecasting (Deng et al., 2022) to our comparison. AutoPyTorch builds deep learning forecasting models by combining neural architecture search (e.g., by trying various encoder modules) and hyperparameter optimization (e.g., by tuning the learning rate). The search process is powered by a combination of Bayesian and multi-fidelity optimization. Similar to AutoGluon, the models are combined using ensemble selection (Caruana et al., 2004).

Datasets. In our evaluation we use 29 publicly available forecasting benchmark datasets provided via GluonTS. These include datasets from the Monash Forecasting Repository (Godahewa et al., 2021), such as the M1, M3 and M4 competition data (Makridakis and Hibon, 2000; Makridakis et al., 2018). We selected the datasets from the Monash Repository that contain more than a single time series and fewer than 15M total time steps.
Our selection of datasets covers various scenarios that can be encountered in practice—from small datasets (M1 and M3), to datasets with a few long time series (Electricity, Pedestrian Counts) and large collections of medium-sized time series (M4). A comprehensive list of dataset statistics is provided in Table 8 in the appendix.

Configuration. We train the TimeSeriesPredictor from AG–TS with best_quality presets, as these are designed to produce the most accurate forecasts, and set the time_limit to 4 hours. Note that the presets were fixed a priori and not optimized using the benchmark datasets. DeepAR and TFT are also trained for up to 4 hours with early stopping on validation loss with patience set to 200. For these models, the model checkpoint achieving the best validation loss is used to generate the test predictions. The time limit for AutoPyTorch is similarly set to 4 hours. We set no time limit for the remaining statistical models, as they do not support such functionality. In case the runtime of a single experiment exceeds 6 hours, the job is interrupted and the result is marked as a failure. More details about the configuration are available in Appendix A.3.

All models are trained using AWS m6i.4xlarge cloud instances (16 vCPU cores, 64 GB RAM). We use CPU instances to fairly evaluate the CPU-only baselines, though AG–TS additionally supports GPU training. Each run is repeated 5 times using different random seeds for non-deterministic models. We run all experiments using AutoMLBenchmark (Gijsbers et al., 2022). In the supplement, we provide full configuration details and the scripts for reproducing all experiments.

5.2 Forecasting Accuracy

We measure the accuracy of the point forecasts by reporting the mean absolute scaled error (MASE) of all forecasting methods on all benchmark datasets. AG–TS and AutoPyTorch are trained to optimize the MASE metric, while all other models are trained using their normal training procedure. We report the aggregate statistics in Table 3, and provide the full results for individual models and datasets in Table 9 in the appendix.

Table 3: Point forecast accuracy comparison of baseline methods with AutoGluon (based on the MASE metric) on 29 datasets. Listed are the number of datasets where each method produced: lower error than AutoGluon (Wins), higher error (Losses), error within 0.001 (Ties), an error during prediction (Failures), or the lowest error among all methods (Champion). Average rank and average error are computed using the datasets where no method failed. We rescale the errors for each dataset between [0, 1] to ensure that averaging is meaningful. The final column reports the win rate versus the Seasonal Naive baseline. Individual results are given in Table 9.

Framework  Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (MASE) - - - 0 19 2.08 0.073 100.0%
StatEnsemble 6 20 0 3 3 3.12 0.238 82.8%
AutoPyTorch (MASE) 4 25 0 0 2 4.12 0.257 93.1%
AutoETS 4 25 0 0 1 4.64 0.374 75.9%
AutoTheta 4 23 0 2 0 4.92 0.427 72.4%
DeepAR 4 24 0 1 2 5.08 0.434 93.1%
AutoARIMA 4 22 0 3 1 5.92 0.612 79.3%
TFT 2 27 0 0 1 6.12 0.635 75.9%

Table 4: Probabilistic forecast accuracy comparison of each baseline method with AutoGluon (based on the wQL metric) on 29 datasets. The columns are defined as in Table 3. Results for individual models and datasets are given in Table 10.

Framework  Wins  Losses  Ties  Failures  Champion  Average rank  Average rescaled error  Win rate vs. baseline
AutoGluon (wQL) - - - 0 19 1.80 0.086 100.0%
StatEnsemble 3 23 0 3 0 3.36 0.330 86.2%
DeepAR 5 23 0 1 1 4.08 0.455 89.7%
TFT 5 24 0 0 5 4.24 0.487 89.7%
AutoETS 3 26 0 0 2 4.40 0.489 69.0%
AutoTheta 2 25 0 2 1 5.00 0.545 69.0%
AutoARIMA 4 22 0 3 1 5.12 0.641 82.8%

We measure the accuracy of the probabilistic (quantile) forecasts by reporting the mean weighted quantile loss (wQL) averaged over 9 quantile levels $q \in \{0.1, 0.2, \dots, 0.9\}$. AG–TS is configured to optimize the wQL metric.
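To make the metric concrete, the sketch below implements one common formulation of the weighted quantile loss, in which twice the summed pinball loss is normalized by the summed absolute target values. This normalization follows the convention used by GluonTS; AG–TS's exact implementation may differ, so treat this as an illustrative assumption.

```python
def pinball_loss(y, y_hat, q):
    """Quantile (pinball) loss of forecast y_hat at level q for one observation."""
    return q * max(y - y_hat, 0.0) + (1 - q) * max(y_hat - y, 0.0)

def weighted_quantile_loss(y_true, y_pred, q):
    """wQL at level q: 2 * total pinball loss / total |y| (one common convention)."""
    total = sum(pinball_loss(y, f, q) for y, f in zip(y_true, y_pred))
    return 2.0 * total / sum(abs(y) for y in y_true)

# Toy example: at q = 0.5 the pinball loss is half the absolute error,
# so wQL reduces to sum|y - y_hat| / sum|y|.
y_true = [10.0, 20.0, 30.0]
median_forecast = [12.0, 18.0, 30.0]
wql_05 = weighted_quantile_loss(y_true, median_forecast, q=0.5)  # (2 + 2 + 0) / 60
```

Averaging this quantity over the nine levels q in {0.1, ..., 0.9} gives the mean wQL reported above.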
We exclude AutoPyTorch from this comparison since this framework does not support probabilistic forecasting. We report the aggregate statistics in Table 4, and provide the full results for individual models and datasets in Table 10 in the appendix.

Some of the frameworks failed to generate forecasts on certain datasets. AutoARIMA, AutoTheta and StatEnsemble did not finish training on some datasets (Electricity–Hourly, KDD Cup 2018, and Pedestrian Counts) within 6 hours. This is caused by the poor scaling of these models to very long time series. The DeepAR model fails on one dataset (Web Traffic Weekly) due to numerical errors encountered during training.

Discussion. The results demonstrate that AG–TS outperforms all other frameworks, achieving the best average rank and rescaled error for both point and probabilistic forecasts, and even beating the best-in-hindsight competing method on 19 out of 29 datasets.

StatEnsemble places second after AG–TS. The statistical ensemble performs especially well on small datasets such as M1 and M3. This demonstrates that in the low-data regime simple approaches, like ensembling by taking the median, may perform better than the learned ensemble selection strategy employed by both AutoML frameworks.

Figure 2: Total runtime of each framework across all datasets. AutoGluon always completes training and prediction under the time limit and achieves a mean runtime of 33 minutes. AutoPyTorch is always trained for the full 4 hour time limit. Statistical models train faster in most cases, but may take an extremely long time to train on datasets with long time series. The runtimes for individual models and datasets are provided in Table 11.

AutoPyTorch achieves similar performance to StatEnsemble in point forecasting across most performance indicators. Interestingly, AG–TS tends to outperform AutoPyTorch on larger datasets like M4. This means that AG–TS's strategy of training various light-weight models performs well in this setting under the limited time budget.
Also note that configuring AutoPyTorch requires more code and domain knowledge, compared to the 3 lines of code necessary to reproduce the above results with AG–TS.

Deep learning models DeepAR and TFT perform well in terms of probabilistic forecasting, but fall behind simple statistical approaches in point forecasts. This makes sense, since the objective functions optimized by these deep learning models are designed for probabilistic forecasting.

5.3 Runtime Comparison

High accuracy is not the only important property of an AutoML system—the ability to generate predictions in a reasonable amount of time is often necessary in practice. To evaluate the efficiency of AG–TS, we compare its runtime with other frameworks. We visualize the runtime of each framework across all datasets in Figure 2. Note that here we compare the total runtime defined as the sum of training and prediction times. This reflects the typical forecasting workflow in practice, where the forecast is generated once for each time series. Moreover, it's hard to distinguish between the training and prediction time for local models, where a new model is trained for each new time series.

AG–TS completes training and prediction under the 4-hour time limit for all 29 datasets, and achieves a mean runtime of 33 minutes. While statistical models are faster on average, they can be extremely slow to train on datasets consisting of long time series. For instance, the runtimes of AutoARIMA, AutoTheta and StatEnsemble exceed 6 hours for 3 datasets with long time series. The deep learning models DeepAR and TFT have higher median runtime compared to the statistical models, but never reach the 4 hour time limit due to early stopping. Finally, AutoPyTorch always consumes the entire 4 hour time budget due to its design.

To summarize, AG–TS is able to produce accurate forecasts under mild time budgets.
While, on average, AG–TS takes more time than the individual models, it produces more accurate forecasts and avoids the extremely long runtimes sometimes exhibited by local models. The results also demonstrate that limited training time is better spent training and ensembling many diverse models (as done by AG–TS), rather than hyperparameter tuning a restricted set of models (as done by AutoPyTorch).

5.4 Ablations

Finally, we perform ablations to understand the effect of different components on the final performance. We compare the point forecast accuracy of the TimeSeriesPredictor trained for 4 hours with the MASE evaluation metric (Section 5.2) against several variations with certain disabled components. First, we exclude some base models from the presets: statistical models (NoStatModels), deep learning models (NoDeepModels), and tabular models (NoTabularModels). We also consider reducing the time limit to 1 hour (AutoGluon-1h) or 10 minutes (AutoGluon-10m), as well as disabling the final ensembling step (NoEnsemble). In the latter case, AG–TS predicts using the model with the best validation score. The rest of the setup is identical to Section 5.2.

Table 5: Ablation study. We compare the point forecast accuracy of AutoGluon, where certain component models are removed, ensembling is disabled, or the time limit is reduced. All versions except AutoGluon-1h and AutoGluon-10m are trained for 4 hours. The columns are defined and the scores are computed as in Table 3.

Framework  Champion  Average rank  Average rescaled error
AutoGluon-1h 19 2.04 0.070
AutoGluon-4h 19 2.08 0.073
NoStatModels 16 2.12 0.094
NoTabularModels 15 2.12 0.085
NoDeepModels 15 2.28 0.124
AutoGluon-10m 14 2.50 0.099
NoEnsemble 7 3.52 0.177

Table 5 shows the metrics for the different model variations, each compared to the baselines from Section 5.2. AutoGluon-4h and AutoGluon-1h produce nearly identical results.
This is not surprising, as the 4-hour version finishes training under 1 hour for most datasets (Figure 2). Interestingly, AutoGluon achieves strong results even with a 10-minute time limit, achieving the best average rank and outperforming the best-in-hindsight model on 14 out of 29 datasets.

Removing the ensembling step has the most detrimental effect on the overall accuracy. This highlights the importance of ensembling, confirming the findings of other works (Makridakis et al., 2018; Borchert et al., 2022). The ablations also show that all 3 classes of models used by AutoGluon are important for the overall performance, with deep learning models being the most critical component.

6 Future Work

Our experiments demonstrate the strong forecasting accuracy achieved by AG–TS. Despite these encouraging initial results, we aim to continue developing the library, adding new functionality and further boosting the forecasting performance. This includes incorporating the various ideas in the space of AutoML for forecasting (Meisenbacher et al., 2022), with a focus on the following directions.

Ensembling. Advanced ensembling strategies, such as stacking (Ting and Witten, 1997), lie at the core of modern high-performing AutoML systems (Erickson et al., 2020). How to best generalize these techniques to probabilistic forecasting is an active, but still open research question (Gastinger et al., 2021; Wang et al., 2022).

Calibration. Many practical tasks require guarantees on the uncertainty estimates associated with the forecasts. Conformal prediction methods (Stankeviciute et al., 2021; Xu and Xie, 2021) provide one way to obtain such guarantees, and we plan to incorporate them into AG–TS in the future.

New problem types. AG–TS supports the most common types of forecasting tasks, such as probabilistic forecasting or handling covariates. However, there are several settings that are currently (as of v0.8) not supported.
These include so-called cold-start forecasting (where little historic data is available) and generating forecast explanations (Rojat et al., 2021). Another interesting potential application for AG–TS is assisting judgemental forecasting. In this context, AG–TS could serve as a "tool" queried by a large language model (LLM) (Schick et al., 2023) to generate qualitative forecasts. More generally, combinations of LLMs with AutoML frameworks are an exciting direction for future work (Tornede et al., 2023).

Scalability. In our experiments we consider datasets with up to ≈10^7 time steps across all time series. Modern applications, however, sometimes require operating on even larger scales. This would require improving the efficiency of existing models and developing new efficient AutoML techniques.

7 Conclusions

In this work, we introduced AutoGluon–TimeSeries, a powerful and user-friendly open-source AutoML library for probabilistic time series forecasting. By combining statistical models and deep learning forecasting approaches with ensembling techniques, AutoGluon–TimeSeries is able to achieve strong empirical results on a range of benchmark datasets. With the ability to generate accurate point and quantile forecasts with just 3 lines of Python code, this framework is poised to make time series forecasting more accessible and efficient for a wide range of users.

8 Broader Impact Statement

AutoGluon–TimeSeries enables users to generate accurate forecasts in a few lines of code. This democratizes machine learning, lowering the barrier to entry to forecasting for non-experts. At the same time, AutoGluon–TimeSeries can be used by experienced users to design highly accurate forecasting pipelines. More accurate forecasts can directly translate to real-world impact in various domains.
For example, forecasting renewable energy generation is a crucial component of smart grid management (Tripathy and Prusty, 2021); accurately predicting demand leads to more efficient inventory management and increased revenue (Makridakis et al., 2022).

The potential negative impacts of the proposed approach are similar to those of other forecasting models. One such danger arises when the limitations of forecasting methods are not taken into account in the context of decision making (e.g., when guiding policy decisions). As forecasting models only capture statistical dependencies, they may be misleading when trying to estimate effects of actions or interventions.

9 Submission Checklist

1. For all authors...

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] All claims are supported by the experimental evaluation in Section 5.

(b) Did you describe the limitations of your work? [Yes] See Section 6.

(c) Did you discuss any potential negative societal impacts of your work? [Yes] See Section 8.

(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them? https://automl.cc/ethics-accessibility/ [Yes] The paper conforms to the guidelines.

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [N/A] The paper contains no theoretical results.

(b) Did you include complete proofs of all theoretical results? [N/A] The paper contains no theoretical results.

3. If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit version), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)?
[Yes] All of the above are included in the supplementary material.

(b) Did you include the raw results of running the given instructions on the given code and data? [Yes] Results are provided in CSV format.

(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [No] We provide the raw data and describe the procedure in the paper, which should make reproducing the results and figures straightforward.

(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes] The code is properly documented and we made sure that it can be executed in a fresh environment.

(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes] We use the standard evaluation protocol: For all datasets, the last prediction_length time steps of each time series are held out and used to evaluate the forecasts produced by each method. For hyperparameters, see Section A.3.

(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [Yes] We carefully made sure that this is the case.

(g) Did you run ablation studies to assess the impact of different components of your approach? [Yes] See Section 5.4.

(h) Did you use the same evaluation protocol for the methods being compared? [Yes] All methods use an identical evaluation protocol.

(i) Did you compare performance over time? [Yes] We allocate the same runtime budget of 4 hours to all methods. An ablation study is performed where the time limit is reduced to 1 hour and 10 minutes for AutoGluon.

(j) Did you perform multiple runs of your experiments and report random seeds?
[Yes] For all non-deterministic methods, the experiments are repeated with five random seeds: 1, 2, 3, 4, 5.

(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] Error metrics produced by all non-deterministic methods include the mean and the standard deviation (see Tables 9 and 10).

(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [No] These are not available for probabilistic time series forecasting.

(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] The compute infrastructure is described in Section 5.1. The total runtime of all experiments equals approximately 6000 hours (≈ # models × # seeds × # datasets).

(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also hyperparameters of your own method)? [Yes] We describe the hyperparameter settings in Appendix A.3, in addition to providing the code that can be used to reproduce the results.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] References for all used datasets and methods are provided in Section 5.1.

(b) Did you mention the license of the assets? [Yes] This paper does not introduce any new public assets. The AutoGluon library is released under the Apache 2.0 License.

(c) Did you include any new assets either in the supplemental material or as a URL? [No] This paper does not introduce any new public assets.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The evaluation was performed using public benchmark datasets.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
[N/A] The evaluation was performed using public benchmark datasets.

5. If you used crowdsourcing or conducted research with human subjects...

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] We did not use crowdsourcing or conduct research with human subjects.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] We did not use crowdsourcing or conduct research with human subjects.

References

Alexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T., Maddix, D. C., Rangapuram, S., Salinas, D., Schulz, J., et al. (2020). GluonTS: Probabilistic and neural time series modeling in Python. The Journal of Machine Learning Research, 21(1):4629–4634.

Ali, M. (2020). PyCaret: An open source, low-code machine learning library in Python. https://www.pycaret.org.

Assimakopoulos, V. and Nikolopoulos, K. (2000). The Theta model: A decomposition approach to forecasting. International Journal of Forecasting, 16(4):521–530.

Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, Y., Maddix, D., Turkmen, C., Gasthaus, J., Bohlke-Schneider, M., Salinas, D., Stella, L., et al. (2022). Deep learning for time series forecasting: Tutorial and literature survey. ACM Computing Surveys, 55(6):1–36.

Borchert, O., Salinas, D., Flunkert, V., Januschowski, T., and Günnemann, S. (2022). Multi-objective model selection for time series forecasting. arXiv preprint arXiv:2202.08485.

Box, G. E., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M. (1970). Time series analysis: forecasting and control. John Wiley & Sons.

Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from libraries of models.
In Proceedings of the twenty-first international conference on Machine learning, page 18.

Catlin, C. (2022). AutoTS: Automated time series forecasting. https://github.com/winedarksea/AutoTS.

da Silva, F. R., Vieira, A. B., Bernardino, H. S., Alencar, V. A., Pessamilio, L. R., and Barbosa, H. J. C. (2022). Automated machine learning for time series prediction. In 2022 IEEE Congress on Evolutionary Computation (CEC), pages 1–7. IEEE.

Dahl, S. M. J. (2020). TSPO: an autoML approach to time series forecasting. PhD thesis.

Deng, D., Karl, F., Hutter, F., Bischl, B., and Lindauer, M. (2022). Efficient automated deep learning for time series forecasting. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part III, pages 664–680. Springer.

Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and accurate AutoML for structured data. arXiv preprint arXiv:2003.06505.

Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficient and robust automated machine learning. Advances in Neural Information Processing Systems, 28.

Garza, F., Mergenthaler Canseco, M., Challu, C., and Olivares, K. G. (2022). StatsForecast: Lightning fast forecasting with statistical and econometric models. https://github.com/Nixtla/statsforecast (v1.15.0).

Gastinger, J., Nicolas, S., Stepić, D., Schmidt, M., and Schülke, A. (2021). A study on ensemble learning for time series forecasting and the need for meta-learning. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.

Gijsbers, P., Bueno, M. L., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2022). AMLB: An AutoML benchmark. arXiv preprint arXiv:2207.12560.

Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting.
Annual Review of Statistics and Its Application, 1:125–151.

Godahewa, R., Bergmeir, C., Webb, G. I., Hyndman, R. J., and Montero-Manso, P. (2021). Monash time series forecasting archive. In Neural Information Processing Systems Track on Datasets and Benchmarks.

Hong, T., Pinson, P., Wang, Y., Weron, R., Yang, D., and Zareipour, H. (2020). Energy forecasting: A review and outlook. IEEE Open Access Journal of Power and Energy, 7:376–388.

Hyndman, R., Koehler, A. B., Ord, J. K., and Snyder, R. D. (2008). Forecasting with exponential smoothing: the state space approach. Springer Science & Business Media.

Hyndman, R. J. and Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts.

Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: the forecast package for R. Journal of Statistical Software, 27:1–22.

Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., and Callot, L. (2020). Criteria for classifying forecasting methods. International Journal of Forecasting, 36(1):167–177.

Januschowski, T., Wang, Y., Torkkola, K., Erkkilä, T., Hasson, H., and Gasthaus, J. (2022). Forecasting with trees. International Journal of Forecasting, 38(4):1473–1481.

Javeri, I. Y., Toutiaee, M., Arpinar, I. B., Miller, J. A., and Miller, T. W. (2021). Improving neural networks for time-series forecasting using data augmentation and AutoML. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 1–8. IEEE.

Joblib Development Team (2020). Joblib: Running Python functions as pipeline jobs. https://joblib.readthedocs.io/ (v1.2.0).

Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. Advances in Neural Information Processing Systems, 30.

Kurian, J. J., Dix, M., Amihai, I., Ceusters, G., and Prabhune, A. (2021).
BOAT: A Bayesian optimization autoML time-series framework for industrial applications. In 2021 IEEE Seventh International Conference on Big Data Computing Service and Applications (BigDataService), pages 17–24. IEEE.

LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable automatic machine learning. In Proceedings of the AutoML Workshop at ICML, volume 2020.

Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748–1764.

Makridakis, S. and Hibon, M. (2000). The M3 competition: Results, conclusions and implications. International Journal of Forecasting, 16(4):451–476.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 competition: Results, findings, conclusion and way forward. International Journal of Forecasting, 34(4):802–808.

Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2022). The M5 competition: Background, organization, and implementation. International Journal of Forecasting, 38(4):1325–1336.

Meisenbacher, S., Turowski, M., Phipps, K., Rätz, M., Müller, D., Hagenmeyer, V., and Mikut, R. (2022). Review of automated time series forecasting pipelines. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 12(6):e1475.

Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. (2023). A time series is worth 64 words: Long-term forecasting with transformers. International Conference on Learning Representations.

Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C., and Vasilakis, C. (2021). Forecasting and planning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmental decisions. European Journal of Operational Research, 290(1):99–115.

Nixtla (2023). MLForecast: Scalable machine learning for time series forecasting. v0.7.2.

Olson, R. S. and Moore, J. H. (2016). TPOT: A tree-based pipeline optimization tool for automating machine learning.
In Workshop on automatic machine learning, pages 66–74. PMLR.

Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y. (2020). N-BEATS: Neural basis expansion analysis for interpretable time series forecasting.

pandas development team (2020). pandas-dev/pandas: Pandas. https://doi.org/10.5281/zenodo.3509134 (v1.5.3).

Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics. Psychological Bulletin, 86(3):446.

Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., and Díaz-Rodríguez, N. (2021). Explainable artificial intelligence (XAI) on timeseries data: A survey. arXiv preprint arXiv:2104.00950.

Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. (2020). DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 36(3):1181–1191.

Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., and Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761.

Semenoglou, A.-A., Spiliotis, E., Makridakis, S., and Assimakopoulos, V. (2021). Investigating the accuracy of cross-learning time series forecasting methods. International Journal of Forecasting, 37(3):1072–1084.

Shah, S. Y., Patel, D., Vu, L., Dang, X.-H., Chen, B., Kirchner, P., Samulowitz, H., Wood, D., Bramble, G., Gifford, W. M., et al. (2021). AutoAI-TS: AutoAI for time series forecasting. In Proceedings of the 2021 International Conference on Management of Data, pages 2584–2596.

Shi, X., Mueller, J., Erickson, N., Li, M., and Smola, A. (2021). Multimodal AutoML on structured tables with text fields. In 8th ICML Workshop on Automated Machine Learning (AutoML).

Stankeviciute, K., M Alaa, A., and van der Schaar, M. (2021). Conformal time-series forecasting. Advances in Neural Information Processing Systems, 34:6216–6228.

Syntetos, A. A., Boylan, J. E., and Disney, S. M. (2009).
Forecasting for inventory planning: a 50-year review. Journal of the Operational Research Society, 60:S149–S160.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 847–855.

Ting, K. M. and Witten, I. H. (1997). Stacking bagged and dagged models.

Tornede, A., Deng, D., Eimer, T., Giovanelli, J., Mohan, A., Ruhkopf, T., Segel, S., Theodorakopoulos, D., Tornede, T., Wachsmuth, H., et al. (2023). AutoML in the age of large language models: Current challenges, future opportunities and risks. arXiv preprint arXiv:2306.08107.

Tripathy, D. S. and Prusty, B. R. (2021). Forecasting of renewable generation for applications in smart grid power systems. In Advances in Smart Grid Power System, pages 265–298. Elsevier.

Van Kuppevelt, D., Meijer, C., Huber, F., van der Ploeg, A., Georgievska, S., and van Hees, V. T. (2020). Mcfly: Automated deep learning on time series. SoftwareX, 12:100548.

Wang, X., Hyndman, R. J., Li, F., and Kang, Y. (2022). Forecast combinations: an over 50-year review. International Journal of Forecasting.

Wen, R., Torkkola, K., Narayanaswamy, B., and Madeka, D. (2017). A multi-horizon quantile recurrent forecaster. arXiv preprint arXiv:1711.11053.

Xu, C. and Xie, Y. (2021). Conformal prediction interval for dynamic time-series. In International Conference on Machine Learning, pages 11559–11569. PMLR.

Zimmer, L., Lindauer, M., and Hutter, F. (2021). Auto-PyTorch: Multi-fidelity metalearning for efficient and robust AutoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9):3079–3090.

A Supplementary Materials

A.1 Evaluation Metrics

MASE.
Mean absolute scaled error is the standard metric for evaluating the accuracy of point forecasts.

\mathrm{MASE} = \frac{1}{N} \sum_{i=1}^{N} \frac{\frac{1}{H} \sum_{h=1}^{H} |y_{i,T+h} - \hat{y}_{i,T+h}|}{\frac{1}{T-s} \sum_{t=1}^{T-s} |y_{i,t+s} - y_{i,t}|}

MASE is scale-invariant and does not suffer from the limitations of other metrics, such as being undefined when the target time series equals zero (Hyndman and Athanasopoulos, 2018). We compute the metric using the median (0.5 quantile) forecast produced by each model.

wQL. Weighted quantile loss for a single quantile level q is defined as

\mathrm{wQL}[q] = 2 \, \frac{\sum_{i=1}^{N} \sum_{h=1}^{H} \left[ q \cdot \max(y_{i,T+h} - \hat{y}^{q}_{i,T+h}, 0) + (1-q) \cdot \max(\hat{y}^{q}_{i,T+h} - y_{i,T+h}, 0) \right]}{\sum_{i=1}^{N} \sum_{h=1}^{H} |y_{i,T+h}|}

In our experiments, we report the mean wQL averaged over 9 quantile levels Q = \{0.1, 0.2, \ldots, 0.9\}:

\mathrm{wQL} = \frac{1}{|Q|} \sum_{q \in Q} \mathrm{wQL}[q]

A.2 Reproducibility

We ran all experiments using AutoMLBenchmark (Gijsbers et al., 2022). We provide a fork of AMLB that includes all scripts necessary to reproduce the results from our paper in the following GitHub repository: https://github.com/shchur/automlbenchmark/tree/autogluon-timeseries-automl23/autogluon_timeseries_automl23

A.3 Model Configuration

We trained the baseline models DeepAR, TFT, AutoARIMA, AutoETS, and AutoTheta with the default hyperparameter configurations provided by the respective libraries. For DeepAR and TFT, the last prediction_length time steps of each time series were reserved as a validation set. Both models were trained for the full duration of 4 hours, saving the parameters and evaluating the validation loss at each epoch. The parameters achieving the lowest validation loss were then used for prediction. No HPO was performed for these two models, as AutoPyTorch already trains similar deep learning models with HPO.

For AutoPyTorch, we used the reference implementation by the authors.³ We set the target metric to "mean_MASE_forecasting", budget_type="epochs", min_budget=5, max_budget=50, and resampling_strategy=HoldoutValTypes.time_series_hold_out_validation.
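As a concrete companion to the metric definitions in Section A.1, the following pure-Python sketch computes MASE for a single series and mean wQL on toy data; the function names and toy values are illustrative only, and the MASE denominator follows the standard convention of scaling by the mean in-sample absolute error of the seasonal naive forecast:

```python
def mase(y_hist, y_true, y_pred, seasonality=1):
    """MASE for a single series: mean absolute forecast error scaled by the
    mean in-sample absolute error of the seasonal naive forecast."""
    h = len(y_true)
    forecast_error = sum(abs(y - f) for y, f in zip(y_true, y_pred)) / h
    naive_errors = [
        abs(y_hist[t + seasonality] - y_hist[t])
        for t in range(len(y_hist) - seasonality)
    ]
    scale = sum(naive_errors) / len(naive_errors)
    return forecast_error / scale


def mean_wql(y_true, quantile_forecasts, quantile_levels):
    """Mean weighted quantile loss, averaged over the given quantile levels."""
    denom = sum(abs(y) for y in y_true)
    losses = []
    for q, y_q in zip(quantile_levels, quantile_forecasts):
        num = sum(
            q * max(y - f, 0.0) + (1.0 - q) * max(f - y, 0.0)
            for y, f in zip(y_true, y_q)
        )
        losses.append(2.0 * num / denom)
    return sum(losses) / len(losses)


history = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
actuals = [15.0, 13.0]
print(mase(history, actuals, [14.0, 14.0], seasonality=1))  # → 0.625
print(mean_wql(actuals, [[14.0, 14.0]], quantile_levels=[0.5]))
```

Note that for q = 0.5 the weighted quantile loss reduces to a scaled absolute error of the median forecast, which is why the median forecast is used when computing MASE.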
We also set torch_num_threads to 16 (the number of vCPU cores).

In our experiments, we used AG–TS v0.8.2, the latest release at the time of publication. We used the "best_quality" presets and set eval_metric to either "MASE" or "mean_wQuantileLoss", depending on the experiment. All other parameters of the TimeSeriesPredictor were set to their default values. The "best_quality" presets include the following models: AutoETS, AutoARIMA, Theta (from StatsForecast), DeepAR, PatchTST, TFT (from GluonTS), DirectTabular, RecursiveTabular (wrappers around AutoGluon–Tabular and MLForecast), plus the baseline methods Naive and SeasonalNaive. The non-default hyperparameters of the individual models used by the best_quality presets are provided in Table 6.

³ https://github.com/dengdifan/Auto-PyTorch/blob/ecml22_apt_ts/examples/APT-TS/APT_task.py

The guiding principle for developing the presets for AG–TS can be summarized as "keep defaults whenever possible, except the cases where the defaults are clearly suboptimal". For example, we set allowmean=True for AutoARIMA to allow this model to handle time series with non-zero mean. For deep learning models, we increase the batch size from 32 to 64, since larger batch sizes typically lead to faster convergence for all deep learning models. The context_length is capped at a minimum value because the default setting context_length=prediction_length can result in models that ignore most of the history if prediction_length is very short. For PatchTST, we set the context_length to the value used in the respective publication (Nie et al., 2023).

The versions of frameworks used in our experiments are listed in Table 7.

Table 6: Non-default hyperparameters that AutoGluon sets for the underlying models. The remaining parameters are all set to their defaults in the respective libraries.
Models not listed here (Naive, SeasonalNaive, AutoETS, DirectTabular, Theta) have all their hyperparameters set to the default values.

Model             Hyperparameter           Value
AutoARIMA         allowmean                True
                  approximation            True
DeepAR            batch_size               64
                  context_length           max(10, 2 * prediction_length)
                  num_samples              250
PatchTST          batch_size               64
                  context_length           96
TFT               batch_size               64
                  context_length           max(64, 2 * prediction_length)
RecursiveTabular  tabular_hyperparameters  {"GBM", "NN_TORCH"}

Table 7: Versions of the frameworks used during evaluation.

Framework      Version
AutoGluon      0.8.2
AutoPyTorch    0.2.1
GluonTS        0.13.2
MLForecast     0.7.3
StatsForecast  1.5.0
Python         3.9
PyTorch        1.13.1+cpu

Table 8: Statistics of the benchmark datasets used in our experimental evaluation. Frequency is represented by pandas offset aliases. Seasonality depends on the frequency, and is used to configure statistical models and compute the MASE metric.

Dataset             # series  # time steps  Prediction length  Frequency  Seasonality
Car Parts              2,674       104,286         12              M          12
CIF 2016                  72         6,244         12              M          12
COVID                    266        48,412         30              D           7
Electricity Hourly       321     8,428,176         48              H          24
Electricity Weekly       321        47,508          8              W           1
FRED-MD                  107        76,612         12              M          12
Hospital                 767        55,224         12              M          12
KDD Cup 2018             270     2,929,404         48              H          24
M1 Monthly               617        44,892         18              M          12
M1 Quarterly             203         8,320          8              Q           4
M1 Yearly                181         3,429          6              Y           1
M3 Monthly             1,428       141,858         18              M          12
M3 Other                 174        11,933          8              Q           1
M3 Quarterly             756        30,956          8              Q           4
M3 Yearly                645        14,449          6              Y           1
M4 Daily               4,227     9,964,658         14              D           7
M4 Hourly                414       353,500         48              H          24
M4 Monthly            48,000    10,382,411         18              M          12
M4 Quarterly          24,000     2,214,108          8              Q           4
M4 Weekly                359       366,912         13              W           1
M4 Yearly             22,974       707,265          6              Y           1
NN5 Daily                111        81,585         56              D           7
NN5 Weekly               111        11,655          8              W           1
Pedestrian Counts         66     3,129,178         48              H          24
Tourism Monthly          366       100,496         24              M          12
Tourism Quarterly        427        39,128          8              Q           4
Tourism Yearly           518        10,685          4              Y           1
Vehicle Trips            262        45,253          7              D           7
Web Traffic Weekly   145,063    15,376,678          8              W           1

Table 9: Point forecast accuracy, as measured by MASE (lower is better).
For non-deterministic methods(DeepAR, TFT, AutoPyTorch, AutoGluon) we report the mean and standard deviation of thescores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generatea forecast in 6 hours. "N/A" denotes model failure.SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluonCar Parts 1.127 1.118 1.133 1.208 1.052 0.749 (0.001) 0.751 (0.002) 0.746 (0.0) 0.747 (0.0)CIF 2016 1.289 1.069 0.898 1.006 0.945 1.278 (0.088) 1.372 (0.085) 1.023 (0.069) 1.073 (0.006)COVID 8.977 6.029 5.907 7.719 5.884 7.166 (0.334) 5.192 (0.211) 4.911 (0.086) 5.805 (0.0)Electricity Hourly 1.405 d.n.f. 1.465 d.n.f. d.n.f. 1.251 (0.006) 1.389 (0.025) 1.420 (0.123) 1.227 (0.003)Electricity Weekly 3.037 3.009 3.076 3.113 3.077 2.447 (0.211) 2.861 (0.122) 2.322 (0.277) 1.892 (0.0)FRED-MD 1.101 0.478 0.505 0.564 0.498 0.634 (0.038) 0.901 (0.086) 0.682 (0.058) 0.656 (0.0)Hospital 0.921 0.820 0.766 0.764 0.753 0.771 (0.008) 0.814 (0.012) 0.770 (0.003) 0.741 (0.001)KDD Cup 2018 0.975 d.n.f. 0.988 1.010 d.n.f. 
0.841 (0.036) 0.844 (0.065) 0.764 (0.047) 0.709 (0.026)M1 Monthly 1.314 1.152 1.083 1.092 1.045 1.117 (0.029) 1.534 (0.063) 1.278 (0.115) 1.235 (0.001)M1 Quarterly 2.078 1.770 1.665 1.667 1.622 1.742 (0.028) 2.099 (0.108) 1.813 (0.056) 1.615 (0.0)M1 Yearly 4.894 3.870 3.950 3.659 3.769 3.674 (0.161) 4.318 (0.122) 3.407 (0.078) 3.371 (0.007)M3 Monthly 1.146 0.934 0.867 0.855 0.845 0.960 (0.017) 1.062 (0.04) 0.956 (0.083) 0.822 (0.0)M3 Other 3.089 2.245 1.801 2.009 1.769 2.061 (0.182) 1.926 (0.028) 1.871 (0.024) 1.837 (0.004)M3 Quarterly 1.425 1.419 1.121 1.119 1.096 1.198 (0.037) 1.176 (0.036) 1.180 (0.032) 1.057 (0.002)M3 Yearly 3.172 3.159 2.695 2.608 2.627 2.694 (0.096) 2.818 (0.019) 2.691 (0.026) 2.520 (0.002)M4 Daily 1.452 1.153 1.228 1.149 1.145 1.145 (0.026) 1.176 (0.018) 1.152 (0.009) 1.156 (0.0)M4 Hourly 1.193 1.029 1.609 2.456 1.157 1.484 (0.151) 3.391 (0.442) 1.345 (0.404) 0.807 (0.001)M4 Monthly 1.079 0.812 0.803 0.834 0.780 0.933 (0.01) 0.947 (0.005) 0.851 (0.025) 0.782 (0.0)M4 Quarterly 1.602 1.276 1.167 1.183 1.148 1.367 (0.171) 1.277 (0.015) 1.176 (0.022) 1.139 (0.0)M4 Weekly 2.777 2.355 2.548 2.608 2.375 2.418 (0.026) 2.625 (0.038) 2.369 (0.177) 2.035 (0.001)M4 Yearly 3.966 3.720 3.077 3.085 3.032 3.858 (0.694) 3.220 (0.097) 3.093 (0.041) 3.019 (0.001)NN5 Daily 1.011 0.935 0.870 0.878 0.859 0.812 (0.01) 0.789 (0.004) 0.807 (0.021) 0.761 (0.004)NN5 Weekly 1.063 0.998 0.980 0.963 0.977 0.915 (0.085) 0.884 (0.012) 0.865 (0.025) 0.860 (0.0)Pedestrian Counts 0.369 d.n.f. 0.553 d.n.f. d.n.f. 
… 0.309 (0.005) 0.373 (0.01) 0.354 (0.024) 0.312 (0.009)
Tourism Monthly 1.631 1.585 1.529 1.666 1.469 1.461 (0.025) 1.719 (0.08) 1.495 (0.009) 1.442 (0.0)
Tourism Quarterly 1.699 1.655 1.578 1.648 1.539 1.599 (0.062) 1.830 (0.047) 1.647 (0.034) 1.537 (0.002)
Tourism Yearly 3.552 4.044 3.183 2.992 3.231 3.476 (0.165) 2.916 (0.197) 3.004 (0.053) 2.946 (0.007)
Vehicle Trips 1.302 1.427 1.301 1.284 1.203 1.162 (0.016) 1.227 (0.02) 1.162 (0.019) 1.113 (0.0)
Web Traffic Weekly 1.066 1.189 1.207 1.108 1.068 N/A 0.973 (0.022) 0.962 (0.01) 0.938 (0.0)

Table 10: Probabilistic forecast accuracy, as measured by wQL (lower is better). For non-deterministic methods (DeepAR, TFT, AutoGluon) we report the mean and standard deviation of the scores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generate a forecast in 6 hours. "N/A" denotes model failure.

Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoGluon
Car Parts 1.717 1.589 1.338 1.367 1.324 0.963 (0.009) 0.878 (0.004) 0.923 (0.0)
CIF 2016 0.031 0.017 0.039 0.027 0.028 0.114 (0.024) 0.010 (0.002) 0.019 (0.0)
COVID 0.140 0.030 0.046 0.094 0.046 0.072 (0.02) 0.031 (0.003) 0.030 (0.0)
Electricity Hourly 0.108 d.n.f. 0.100 d.n.f. d.n.f. 0.081 (0.002) 0.097 (0.001) 0.076 (0.0)
Electricity Weekly 0.141 0.138 0.144 0.146 0.141 0.123 (0.041) 0.118 (0.011) 0.088 (0.0)
FRED-MD 0.104 0.056 0.050 0.057 0.054 0.054 (0.021) 0.114 (0.011) 0.056 (0.0)
Hospital 0.062 0.058 0.053 0.055 0.053 0.053 (0.001) 0.054 (0.001) 0.051 (0.0)
KDD Cup 2018 0.489 d.n.f. 0.550 0.553 d.n.f. 0.363 (0.014) 0.488 (0.054) 0.323 (0.014)
M1 Monthly 0.153 0.146 0.163 0.159 0.152 0.136 (0.008) 0.224 (0.016) 0.135 (0.0)
M1 Quarterly 0.119 0.088 0.081 0.082 0.083 0.084 (0.003) 0.093 (0.006) 0.090 (0.0)
M1 Yearly 0.184 0.160 0.139 0.137 0.142 0.142 (0.029) 0.127 (0.004) 0.134 (0.001)
M3 Monthly 0.124 0.102 0.093 0.095 0.092 0.098 (0.001) 0.109 (0.003) 0.089 (0.0)
M3 Other 0.047 0.035 0.032 0.035 0.031 0.036 (0.002) 0.033 (0.001) 0.031 (0.0)
M3 Quarterly 0.083 0.079 0.069 0.070 0.068 0.073 (0.001) 0.071 (0.001) 0.065 (0.0)
M3 Yearly 0.141 0.162 0.129 0.128 0.128 0.117 (0.002) 0.133 (0.001) 0.114 (0.0)
M4 Daily 0.030 0.023 0.025 0.023 0.023 0.023 (0.0) 0.023 (0.0) 0.022 (0.0)
M4 Hourly 0.039 0.036 0.070 0.041 0.037 0.065 (0.03) 0.038 (0.002) 0.030 (0.001)
M4 Monthly 0.109 0.085 0.085 0.088 0.082 0.092 (0.003) 0.089 (0.001) 0.081 (0.0)
M4 Quarterly 0.099 0.082 0.079 0.079 0.076 0.084 (0.005) 0.083 (0.001) 0.075 (0.0)
M4 Weekly 0.073 0.050 0.052 0.053 0.050 0.046 (0.001) 0.049 (0.001) 0.041 (0.0)
M4 Yearly 0.138 0.130 0.111 0.115 0.109 0.124 (0.006) 0.116 (0.004) 0.104 (0.0)
NN5 Daily 0.292 0.169 0.162 0.188 0.164 0.148 (0.002) 0.145 (0.001) 0.140 (0.0)
NN5 Weekly 0.142 0.090 0.088 0.090 0.089 0.084 (0.007) 0.085 (0.001) 0.078 (0.0)
Pedestrian Counts 0.675 d.n.f. 0.764 d.n.f. d.n.f. 0.230 (0.006) 0.261 (0.008) 0.238 (0.013)
Tourism Monthly 0.088 0.095 0.101 0.091 0.085 0.086 (0.005) 0.103 (0.01) 0.083 (0.0)
Tourism Quarterly 0.099 0.098 0.070 0.061 0.070 0.068 (0.002) 0.083 (0.005) 0.072 (0.0)
Tourism Yearly 0.170 0.156 0.157 0.176 0.155 0.141 (0.016) 0.102 (0.006) 0.152 (0.0)
Vehicle Trips 0.112 0.100 0.115 0.120 0.103 0.090 (0.002) 0.099 (0.005) 0.087 (0.0)
Web Traffic Weekly 0.936 0.475 8·10^13 0.503 0.474 N/A 0.223 (0.011) 0.225 (0.0)

Table 11: Average run time of each method (in minutes).

Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluon
Car Parts 0.1 2.4 0.6 0.7 3.3 6.9 9.2 240.3 17.4
CIF 2016 0.1 0.4 0.5 0.6 1.3 4.1 6.2 240.2 16.7
COVID 0.1 1.4 0.5 0.7 2.3 7.9 8.8 240.4 29.3
Electricity Hourly 0.2 >360 21.6 >360 >360 10.4 19.5 240.4 61.2
Electricity Weekly 0.2 0.3 0.4 0.5 1.0 3.1 6.6 240.2 14.9
FRED-MD 0.1 2.4 0.7 0.6 3.4 6.8 5.5 240.2 16.8
Hospital 0.1 0.9 0.7 0.7 2.1 4.6 7.6 240.2 17.4
KDD Cup 2018 0.1 >360 16.3 22.8 >360 12.4 11.9 240.3 56.0
M1 Monthly 0.1 1.5 0.8 0.7 2.7 5.5 6.2 240.2 21.6
M1 Quarterly 0.1 0.3 0.5 0.7 1.3 5.9 5.4 240.2 15.6
M1 Yearly 0.1 0.3 0.4 0.4 0.9 4.2 5.2 240.2 12.9
M3 Monthly 0.1 4.0 1.0 0.8 5.8 5.1 5.9 240.3 24.2
M3 Other 0.1 0.3 0.4 0.4 0.9 5.0 6.0 240.2 13.6
M3 Quarterly 0.1 0.5 0.6 0.7 1.6 4.6 6.0 240.3 15.7
M3 Yearly 0.1 0.4 0.5 0.4 1.0 5.9 5.4 240.2 12.7
M4 Daily 0.2 28.5 33.0 25.3 82.3 6.8 8.4 240.3 68.7
M4 Hourly 0.1 84.9 1.8 0.8 89.5 9.2 10.9 240.2 51.2
M4 Monthly 0.3 296.0 37.6 7.7 340.3 4.9 7.9 242.0 112.1
M4 Quarterly 0.2 15.7 6.2 1.6 23.2 4.7 7.6 240.9 62.3
M4 Weekly 0.1 0.6 0.5 1.3 2.2 5.6 7.8 240.3 20.8
M4 Yearly 0.2 4.3 0.8 0.7 5.6 4.2 6.1 240.8 35.6
NN5 Daily 0.1 2.5 0.5 0.6 3.3 7.3 10.9 240.3 37.4
NN5 Weekly 0.1 0.3 0.4 0.4 1.0 3.6 6.4 240.2 13.7
Pedestrian Counts 0.1 >360 4.9 >360 >360 13.5 16.7 240.7 56.4
Tourism Monthly 0.1 10.2 0.8 0.7 13.1 4.4 7.6 240.2 26.0
Tourism Quarterly 0.1 0.9 0.6 0.7 1.8 3.6 6.3 240.2 14.6
Tourism Yearly 0.1 0.3 0.4 0.4 1.0 3.5 5.8 240.3 12.4
Vehicle Trips 0.1 1.1 0.6 0.7 2.2 5.1 7.3 240.2 16.0
Web Traffic Weekly 0.2 42.3 3.7 6.2 52.8 N/A 8.3 260.5 106.0
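Table 10 reports wQL without defining it in this excerpt. As a hedged illustration, the sketch below implements the commonly used weighted-quantile-loss formulation (pinball loss per quantile level, normalized by the sum of absolute target values and averaged over levels); the function name, the quantile levels, and the toy data are assumptions, not taken from the paper.

```python
import numpy as np

def weighted_quantile_loss(y, q_forecasts, quantiles):
    """Mean weighted quantile loss (wQL) over a set of quantile levels.

    y: (n,) observed values; q_forecasts: one forecast array per quantile level.
    Follows the common definition 2 * sum(pinball) / sum(|y|), averaged over levels.
    """
    y = np.asarray(y, dtype=float)
    total_abs = np.abs(y).sum()
    scores = []
    for q, f in zip(quantiles, q_forecasts):
        diff = y - np.asarray(f, dtype=float)
        pinball = np.maximum(q * diff, (q - 1) * diff)  # per-point quantile loss
        scores.append(2.0 * pinball.sum() / total_abs)
    return float(np.mean(scores))

# toy example: 0.5- and 0.9-quantile forecasts for three observations
y = [10.0, 12.0, 8.0]
forecasts = [[9.0, 11.0, 8.0],    # 0.5-quantile forecast
             [12.0, 13.0, 10.0]]  # 0.9-quantile forecast
wql = weighted_quantile_loss(y, forecasts, [0.5, 0.9])  # -> 0.05
```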
Xcl0bs0dmv
n1ODM_LrRs
KDD.org/2023/Workshop/epiDAMIK
2023
Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract)
["Simon Witzke", "Noel Danz", "Katharina Baum", "Bernhard Y Renard"]
The COVID-19 pandemic has had a considerable global impact over the last few years. Many efforts were made to understand and estimate its development. The availability of large amounts of data, including mobility data, has led to numerous Graph Neural Networks (GNN) being proposed to leverage this data and forecast case numbers for the short-term future. However, information about trend developments, especially where trends reverse directions, is crucial in informing decisions. GNNs may be able to use information from regions where trends change first to improve predictions for locations with delays. We consider the first omicron wave in Germany at the end of 2021 and compare a heterogeneous GNN using mobility data with a model without spatial information. We observe that, for this period, mobility data significantly improve forecasts and specifically that improvements occur earlier in time. Using GNNs and mobility data enables leveraging information from counties affected earlier to improve forecasts for counties affected later. We conclude that such performance improvements could be transferred to counties with earlier change points by also including neighboring nations in the graph structure. Further, we emphasize the need for systematic contextual evaluation of GNN-based models for forecasting pandemic trends.
["mobility data", "trend estimation", "graph neural networks", "covid-19"]
ABSTRACT
The COVID-19 pandemic has had a considerable global impact over the last few years. Many efforts were made to understand and estimate its development. The availability of large amounts of data, including mobility data, has led to numerous Graph Neural Networks (GNN) being proposed to leverage this data and forecast case numbers for the short-term future. However, information about trend developments, especially where trends reverse directions, is crucial in informing decisions. GNNs may be able to use information from regions where trends change first to improve predictions for locations with delays. We consider the first omicron wave in Germany at the end of 2021 and compare a heterogeneous GNN using mobility data with a model without spatial information. We observe that, for this period, mobility data significantly improve forecasts and specifically that improvements occur earlier in time. Using GNNs and mobility data enables leveraging information from counties affected earlier to improve forecasts for counties affected later. We conclude that such performance improvements could be transferred to counties with earlier change points by also including neighboring nations in the graph structure. Further, we emphasize the need for systematic contextual evaluation of GNN-based models for forecasting pandemic trends.

KEYWORDS
mobility data, trend estimation, graph neural networks, covid-19

ACM Reference Format:
Simon Witzke, Noel Danz, Katharina Baum, and Bernhard Y. Renard. 2023. Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract). In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 5 pages.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA
© 2023 Copyright held by the owner/author(s).

1 INTRODUCTION
Spreading from Wuhan, China, in late 2019, the COVID-19 pandemic has held humanity in its grasp until recently [35]. The pandemic has had drastic consequences, with estimates of almost fifteen million excess deaths only in 2020 and 2021 [20] and considerable economic and social damages [5]. The global scale of the pandemic led to large amounts of data on different modalities related to epidemic spread being shared, such as mobility and sequencing data. These have been made available to support the development of forecasting methods intended to inform decision makers concerning potential interventions [21, 23]. Human mobility is a central driver in the geographical spread of epidemics caused by air-borne diseases [3], enabling the virus to travel between regions and, in the case of COVID-19, rapidly infecting most of the world. During the pandemic, researchers have combined mobility networks with mechanistic models to understand the influences of changed mobility behavior and further highlight its importance for the pandemic’s development [4, 30]. Schlosser et al. [30] have shown that lockdowns strongly impacted mobility structures during the first COVID-19 wave in Germany and that the associated reduction in mobility can slow the virus’ geographical spread.
Various spatio-temporal approaches using Recurrent Neural Networks and EXtreme Gradient Boosting have been proposed to forecast county-level COVID-19 metrics [11, 18, 22, 34].
However, recent advances in deep graph learning have led to Graph Neural Networks (GNNs) gaining popularity in domains as diverse as traffic forecasting [12] or computational chemistry [26]. Human mobility between geographical regions can naturally be represented as graphs, where nodes represent locations, such as counties, and edges movements between them. Consequently, numerous approaches that try leveraging the power of GNNs to forecast COVID-19-related metrics, such as cases, deaths, and hospitalizations, have been proposed [9, 10, 13, 24]. These approaches have shown promising results in providing insights into the short-term development of the COVID-19 pandemic. However, informing decision makers about a trend forecast rather than exact numbers might be more beneficial. Communicating trends can be easier than directly communicating cases or deaths. Trends are strong indicators of relevant changes in the pandemic development and a need for interventions, and their interpretation is straightforward. For example, the US Government used a 14-day downward trend in COVID-19 cases as a condition for potential re-openings [6]. For this purpose, systematically evaluating GNN-based methods’ ability to correctly forecast trends is essential. Accurate forecasts are especially relevant for phases with change points, where locations successively experience a change in their trend, such as the peak of a wave.
There are secondary time series modalities, such as Google search trends and smart body temperature sensors. These modalities potentially reflect changes in trends faster than case numbers. This has been successfully leveraged by Kogan et al. [15] and Stolerman et al. [31] to develop early-warning systems in the United States that detect such trend signals up to weeks in advance.
Similarly, GNNs may utilize nodes with leading time series to improve forecasts for nodes with lagging time series by passing information via the underlying graph, i.e., information from locations where changes occur earlier might be beneficial for forecasting locations where similar changes are delayed.
In this work, we investigate whether mobility data can improve forecasts of 14-day linear trends of the COVID-19 incidence. We evaluate county-level forecasts of a heterogeneous GNN for locations experiencing a change point during the second half of the first omicron wave at the end of 2021 in Germany [19], where cases are beginning to decline. We further analyze whether our GNN can utilize information from counties with leading changes for forecasting counties that experience similar changes later. Finally, we discuss the implications for developing and evaluating future GNN-based methods for pandemic forecasting.

2 MATERIALS AND METHODS
2.1 Graph Construction
Inspired by Kapoor et al. [13], we construct heterogeneous spatio-temporal graph samples with distinct edge types for spatial and temporal connections. We design each graph sample to contain 15 weighted mobility subgraphs, representing movements between the 400 German counties as nodes at successive points in time, t−14, ..., t. We use spatial edges to express these mobility graphs. The directed but unweighted temporal edges then link each county at a time point t−14, ..., t to its representations on up to seven previous days, connecting the spatial components of the graph. Therefore, each graph sample represents a single point in time while still including historical information from previous days.
We use mobility data [16, 28] to build the spatial edges.
The used dataset contains the daily movements of nearly one million mobile phone users in Germany and is non-public due to privacy concerns. The number of mobile phones sending location information varies daily, so we normalize the movements by the daily device count and then re-scale all movements with the average daily device count. We find that the daily mobility networks’ adjacency matrices are primarily symmetric, i.e., the opposing edges are highly similar. Therefore, we convert the directed into undirected graphs by summing the weights of the edges in both directions. Finally, we denoise the mobility graphs by removing 30% of the non-zero edges with the lowest edge weights, where edges on the thresholding boundary are removed randomly.
The node features of our graph consist of dynamic and static features. We obtain data on the COVID-19 case numbers starting in January 2020 from the Robert Koch Institute [27] and aggregate the data on the county level, resulting in a total of 400 time series. Countering reporting inaccuracies, we calculate the county-level 7-day incidence, a right-aligned 7-day moving sum normalized by the county population and then scaled by 100,000. Each node at time t has the 7-day incidence of the previous seven days until day t−6 as node features. Additionally, we include a cyclical sine/cosine encoding [33] for the weekday and month. This cyclical encoding aims to improve the learning of short and long-term seasonal effects. Lastly, we use the population density of each county as the only static feature. We collect the census data, such as population size and population density, from the German Federal Office of Statistics [17].
As prediction targets, we use 14-day trends in the COVID-19 incidence obtained from linear approximations. A linear approximation has the advantage that it allows us to estimate the strength of a trend and not only its direction compared to converting the problem to a classification task.
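The dynamic node features described in Section 2.1 (the right-aligned 7-day incidence per 100,000 inhabitants and the cyclical sine/cosine encoding of weekday and month) can be sketched as follows; this is a minimal illustration with assumed function names and toy inputs, not the authors' code.

```python
import numpy as np

def seven_day_incidence(daily_cases, population):
    """Right-aligned 7-day moving sum of cases per 100,000 inhabitants."""
    cases = np.asarray(daily_cases, dtype=float)
    window_sums = np.convolve(cases, np.ones(7), mode="valid")  # right-aligned sums
    return window_sums / population * 100_000

def cyclical_encoding(value, period):
    """Map a periodic value (e.g., weekday 0-6, month 1-12) onto the unit circle,
    so that the first and last value of the period end up adjacent."""
    angle = 2 * np.pi * value / period
    return np.sin(angle), np.cos(angle)

inc = seven_day_incidence([10] * 14, population=100_000)  # constant 10 cases/day
sin_w, cos_w = cyclical_encoding(0, 7)  # weekday 0 -> (0.0, 1.0)
```

The circular encoding avoids the artificial discontinuity a plain integer feature would introduce between, e.g., Sunday (6) and Monday (0).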
For this purpose, we smooth the 7-day incidence time series for the whole dataset to remove remaining artifacts, using a center-aligned 7-day moving average. For each county and time point t, we perform a linear regression on this smoothed time series with the known time series values at time points t+1, ..., t+14 as the dependent variable and the number of days from time t into the future h ∈ 1, ..., 14 as the independent variable. We then use the slope of this regression, representing a linear trend of the COVID-19 incidence over the next 14 days from time point t, as the ground truth for our forecasts.

2.2 Graph Neural Network
Our GNN is similar to the network used by Kapoor et al. [13] and based on Kipf and Welling’s [14] graph convolutional layer. We extend this architecture by using relational graph convolutional layers (R-GCN), an extension for heterogeneous graphs proposed by Schlichtkrull et al. [29] that allows feature updates via multiple edge types, where each edge type has its own set of learned parameters. First, the node features are passed through an initial encoding layer followed by a dropout with a probability of 0.2. Next is a three-layer GNN, each with a dropout probability of 0.5. Like Kapoor et al. [13], we add skip-connections and concatenate the output of the initial encoding layer to the output of each R-GCN layer to preserve local information and counter over-smoothing. Lastly, we use a multi-layer perceptron with a single hidden layer to produce the final prediction. We note that for each graph sample, we only use the embeddings of the most recent spatial subgraph to obtain a single forecast for all 400 counties. All layers have 32 hidden units and use a ReLU as the non-linear activation function, except for the last linear layer, which has 16 hidden units. The output layer uses no activation function, allowing positive and negative trend predictions.
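A single relational graph convolution update of the kind described above (one learned weight matrix per edge type plus a self-loop transform, followed by a ReLU) can be sketched in plain NumPy. The authors use PyTorch Geometric's R-GCN layers, so everything below (function name, mean aggregation, toy shapes) is an illustrative assumption rather than their implementation.

```python
import numpy as np

def rgcn_layer(h, adjacency_by_type, weights_by_type, w_self):
    """One relational graph convolution step in the spirit of Schlichtkrull et al.:
    each edge type aggregates neighbor features with its own weight matrix,
    plus a self-loop transform, followed by a ReLU non-linearity."""
    out = h @ w_self
    for a, w in zip(adjacency_by_type, weights_by_type):
        deg = np.maximum(a.sum(axis=1, keepdims=True), 1.0)  # guard isolated nodes
        out += (a / deg) @ h @ w                              # mean over neighbors
    return np.maximum(out, 0.0)                               # ReLU

# toy graph: 4 nodes, 8 features, two edge types (e.g., spatial and temporal)
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
a_spatial = (rng.random((4, 4)) > 0.5).astype(float)  # stand-in for mobility edges
a_temporal = np.eye(4)                                # stand-in for temporal edges
weights = [rng.normal(size=(8, 8)) for _ in range(2)]
w_self = rng.normal(size=(8, 8))
h_next = rgcn_layer(h, [a_spatial, a_temporal], weights, w_self)
```

Keeping a separate parameter set per edge type is what lets the heterogeneous graph treat mobility edges and temporal self-links differently during message passing.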
We implement our GNN in PyTorch [25] and PyTorch Geometric [7].

2.3 Training setup
We use a mean squared error (MSE) regression loss and an ADAM optimizer with a learning rate of 1.33e−4 and weight decay of 1e−5. We employ a batch size of 128 and train for a maximum of 250 epochs with early stopping, with a patience of 10 epochs without improvement.
We adopt a rolling-origin evaluation approach [32] where we extend the training set by the test sample of the previous iteration. We test from November 10, 2021, until December 19, 2021, with all previous data being used for training and validation. We use all data from January 15, 2020, for training and validation. Therefore, the training and validation set contains 665 samples for the first test sample and grows to 704 samples for the last test sample. Our validation set consists of the day after the last training sample and is used for early stopping and model selection. We always keep a 17-day gap between the validation and test samples to avoid information leakage to the test sample while also mimicking a real-world situation where we use all the available data to make a forecast.
To counter the sparseness of training data and avoid conditioning our model too strongly on periods that contain limited information, such as summer periods with low incidences, we oversample the training set by duplicating specific samples. We combine the global German COVID-19 incidence time series with an exponential function, assigning higher importance to more recent dates. We convert the result into a discrete probability distribution in which each sample is assigned a probability. We then draw from this distribution with replacement. We use an oversampling rate of 10.

2.4 Evaluation Scenario
While we train our models using an MSE regression loss, this metric is not optimal for evaluating our models’ performance. Different counties experience the considered phase of the pandemic differently, and a metric dependent on the range of the trend values could bias our evaluation. Therefore, we evaluate the models’ performance using the Mean Absolute Percentage Error (MAPE) (Appendix A.1) and the symmetric Mean Absolute Percentage Error (sMAPE) (Appendix A.2). Further, while MAPE and sMAPE provide insight into the error in the magnitude of the trend, we are also interested in the model’s ability to predict the direction of the trend. For this purpose, we evaluate our models with an adaptation of the Mean Directional Accuracy (MDA) (Appendix A.3).
To investigate whether our models can leverage mobility data to improve predictions in counties with lagging change points, we consider the first omicron wave at the end of 2021, from November 10 to December 19. For this period, we extract the date on which each county’s corresponding smoothed COVID-19 7-day incidence time series has its maximum, i.e., its peak. We consider this the point when the trend will likely change from positive to negative as the incidence begins to decline.
After obtaining the peak for each county, we use a 7-day moving window to evaluate how the prediction performance develops as more counties reach their peak. For each window, we collect all counties that have their peak inside the current window. We then compute all metrics for these counties using the forecast and ground truth of their peak date and shift the window by one day.
We conduct additional experiments with the same evaluation setup but replace the adjacency matrices of the mobility subgraphs with identity matrices to verify that the difference in performance can be attributed to the mobility data.
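The three evaluation metrics named in Section 2.4 can be sketched as follows, assuming their standard textbook definitions; the paper's exact variants, including its adaptation of the MDA, are specified in appendices not included in this excerpt.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (assumes no true value is zero)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def smape(y_true, y_pred):
    """Symmetric MAPE: error scaled by the mean magnitude of truth and forecast."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(2 * np.abs(y_pred - y_true)
                         / (np.abs(y_true) + np.abs(y_pred))))

def mda(y_true, y_pred):
    """Fraction of forecasts whose sign matches the true trend direction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.sign(y_true) == np.sign(y_pred)))

# toy trend slopes: third forecast gets the direction wrong
t = [1.0, -2.0, 0.5, -1.0]
p = [0.5, -1.0, -0.5, -2.0]
scores = (mape(t, p), smape(t, p), mda(t, p))  # -> (1.0, ~1.0, 0.75)
```

Unlike MAPE, the bounded sMAPE and the sign-based MDA stay well behaved when the true slopes are close to zero, which matters for trends around a wave's peak.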
Thus, we train models with the same number of parameters but do not include spatial information.

3 RESULTS
For all experiments, there is a clear performance improvement as more counties reach their peak over time, consistent across all metrics. This improvement is more pronounced for models with mobility data than for those without spatial information (see Figure 1). To verify that our finding that models with mobility data perform better than models without spatial information is significant, we conduct paired one-tailed Wilcoxon signed-rank tests with significance level α = 0.05 for all metrics. After correcting for multiple testing using the Benjamini-Hochberg method [1], we find that for MAPE (p-value ≈ 0.021), sMAPE (p-value ≈ 2.738e−6), and MDA (p-value ≈ 6.661e−6) the mobility-conditioned models significantly outperform the models without spatial information.

Figure 1: (A) sMAPE (lower is better) for peaks in 7-day moving windows. The performance improves over time for both experiments before declining. The effect occurs earlier and is greater for models with mobility data. (B) The MDA (higher is better) almost mirrors the sMAPE’s behavior. This suggests that while more recent training data improve predictions, this effect is amplified by mobility data.

Figure 1 (A, B) clearly shows that the improvements in sMAPE and MDA happen earlier and are more extreme for the models with mobility data. This difference indicates that the improvements cannot solely be attributed to the fact that the models have seen more recent and relevant data and are therefore conditioned better. Furthermore, due to the 17-day gap to avoid information leakage, the model is unlikely to have seen any recent negative trends for a county before its peak during training. However, as earlier counties are already past their peak and are experiencing decreasing incidences, they can share this information with counties where peaks occur later.

4 DISCUSSION AND CONCLUSION
We find that mobility data significantly improve forecasting performance compared to experiments without spatial information. We have two hypotheses for our observations. Firstly, the structural information in the mobility networks and their variation over time might lead to improved predictions. Secondly, our GNN model can pick up information from counties that experience changes, such as beginning downtrends in incidences, earlier and use it for forecasts of counties where these changes occur later. With our current experimental setup, we are unable to disentangle these hypotheses. However, further experiments, for example, using static spatial connections, could provide insights.
Counties that are the first to experience a change in trend seem unable to benefit from mobility data. However, these counties might be of the highest interest, as changes occur earlier and are likely more vital indicators of the need for interventions. Therefore, it could be valuable to include additional nodes representing neighboring nations in our graph to leverage potentially leading information from them.
Our analysis suggests that systematically analyzing models’ capabilities of making accurate trend forecasts during times of interest is highly valuable. Different components, such as the magnitude and direction of a trend, are relevant for providing a holistic understanding in an epidemiological context. It could be helpful to extend evaluations by applying post-hoc explainability methods for graph-based models to understand better how the models make their predictions.
Such explanations could provide insights for epidemiologists to construct hypotheses regarding the pandemic’s current state and spreading behavior.
We showed the capabilities of a heterogeneous spatio-temporal GNN in leveraging mobility data to improve forecasts for counties with lagging time series directly after a change in trend. We suggest that including more global information via nodes representing other nations could extend this effect to leading counties where changes occur first. Currently, we evaluate single rolling-origin evaluation experiments for the change point of the COVID-19 pandemic in Germany. To substantiate our findings, we will consider different phases of the pandemic, including change points with a switch to upward trends. Furthermore, we will run experiments repeatedly to verify the robustness of our results and establish confidence bounds.

ACKNOWLEDGMENTS
This work was supported by the German BMWK through the DAKI-FWS project [01MK21009E to B.Y.R.].
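The multiple-testing correction applied in Section 3 is the standard Benjamini-Hochberg step-up procedure; a minimal NumPy sketch follows, using the paper's three reported p-values purely for illustration (the authors' exact tooling is not stated, and the reported values are the post-correction ones).

```python
import numpy as np

def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    # scale the i-th smallest p-value by m / rank_i
    scaled = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity: adjusted values may not decrease with rank
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# three p-values, one per metric, as in the paper's significance analysis
adj = benjamini_hochberg([0.021, 2.738e-6, 6.661e-6])
rejected = adj < 0.05  # all three comparisons remain significant at alpha = 0.05
```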
iUZgRrEHXdo
Mobility data added to GNN architecture to improve outbreak forecasting
3: Marginally above acceptance threshold
### Summary This paper seeks to improve graph neural network driven epidemic forecasting by incorporating mobility data. They focus on forecasting the reversal of trends using this additional information. The effect of the mobility data is isolated by comparing the results with a similar GNN without mobility data. They emphasize the value that mobility data holds in this application compared to other sources of data. Although their results show promise for limited settings, they do not consider settings in which the trends changes in direction from downwards to upwards, which is arguably more relevant for disease forecasting. ### Strengths - The authors test under real-world assumptions of data availability lags by using a 17-day gap between their training and testing sets. - The contribution is simple and clearly stated, as well and the hypotheses they introduce concerning their results. ### Weaknesses - Limited to a linear trend of incidence over 14 days. It is unclear whether this means the method only has the capability to detect changes in trend. - They only evaluate an upwards to downwards trend, which may affect the viability of the method until the opposite is verified. ### Suggestions - If possible, it would help to have a small GNN architecture diagram - When you “design each graph sample to contain 15 temporal connections…”, does each “graph sample” represent a single time point? - Elaborate more on the purpose of the cyclical sine/cosine encoding
3: The reviewer is fairly confident that the evaluation is correct
n1ODM_LrRs
KDD.org/2023/Workshop/epiDAMIK
2023
Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract)
["Simon Witzke", "Noel Danz", "Katharina Baum", "Bernhard Y Renard"]
The COVID-19 pandemic has had a considerable global impact over the last few years. Many efforts were made to understand and estimate its development. The availability of large amounts of data, including mobility data, has led to numerous Graph Neural Networks (GNN) being proposed to leverage this data and forecast case numbers for the short-term future. However, information about trend developments, especially where trends reverse directions, is crucial in informing decisions. GNNs may be able to use information from regions where trends change first to improve predictions for locations with delays. We consider the first omicron wave in Germany at the end of 2021 and compare a heterogeneous GNN using mobility data with a model without spatial information. We observe that, for this period, mobility data significantly improve forecasts and specifically that improvements occur earlier in time. Using GNNs and mobility data enables leveraging information from counties affected earlier to improve forecasts for counties affected later. We conclude that such performance improvements could be transferred to counties with earlier change points by also including neighboring nations in the graph structure. Further, we emphasize the need for systematic contextual evaluation of GNN-based models for forecasting pandemic trends.
["mobility data", "trend estimation", "graph neural networks", "covid-19"]
ABSTRACTThe COVID-19 pandemic has had a considerable global impactover the last few years. Many efforts were made to understandand estimate its development. The availability of large amounts ofdata, including mobility data, has led to numerous Graph NeuralNetworks (GNN) being proposed to leverage this data and forecastcase numbers for the short-term future. However, information abouttrend developments, especially where trends reverse directions, iscrucial in informing decisions. GNNs may be able to use informationfrom regions where trends change first to improve predictionsfor locations with delays. We consider the first omicron wave inGermany at the end of 2021 and compare a heterogeneous GNNusing mobility data with a model without spatial information. Weobserve that, for this period, mobility data significantly improveforecasts and specifically that improvements occur earlier in time.Using GNNs and mobility data enables leveraging information fromcounties affected earlier to improve forecasts for counties affectedlater. We conclude that such performance improvements could betransferred to counties with earlier change points by also includingneighboring nations in the graph structure. Further, we emphasizethe need for systematic contextual evaluation of GNN-based modelsfor forecasting pandemic trends.KEYWORDSmobility data, trend estimation, graph neural networks, covid-19ACM Reference Format:Simon Witzke, Noel Danz, Katharina Baum, and Bernhard Y. Renard. 2023.Mobility data improve forecasting of COVID-19 incidence trends usingGraph Neural Networks (Extended Abstract). In epiDAMIK 2023: 6th epi-DAMIK ACM SIGKDD International Workshop on Epidemiology meets DataMining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. 
ACM,New York, NY, USA, 5 pages.Permission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).1 INTRODUCTIONSpreading from Wuhan, China, in late 2019, the COVID-19 pan-demic has held humanity in its grasp until recently [35]. The pan-demic has had drastic consequences, with estimates of almost fifteenmillion excess deaths only in 2020 and 2021 [20] and considerableeconomic and social damages [5]. The global scale of the pandemicled to large amounts of data on different modalities related to epi-demic spread being shared, such as mobility and sequencing data.These have been made available to support the development offorecasting methods intended to inform decision makers concern-ing potential interventions [21, 23]. Human mobility is a centraldriver in the geographical spread of epidemics caused by air-bornediseases [3], enabling the virus to travel between regions and, inthe case of COVID-19, rapidly infecting most of the world. Dur-ing the pandemic, researchers have combined mobility networkswith mechanistic models to understand the influences of changedmobility behavior and further highlight its importance for the pan-demic’s development [4, 30]. Schlosser et al.[30] have shown thatlockdowns strongly impacted mobility structures during the firstCOVID-19 wave in Germany and that the associated reduction inmobility can slow the virus’ geographical spread.Various spatio-temporal approaches using Recurrent Neural Net-works and EXtreme Gradient Boosting have been proposed to fore-cast county-level COVID-19 metrics [11, 18, 22, 34]. 
However, recentadvances in deep graph learning have led to Graph Neural Networks(GNNs) gaining popularity in domains as diverse as traffic forecast-ing [12] or computational chemistry [26]. Human mobility betweengeographical regions can naturally be represented as graphs, wherenodes represent locations, such as counties, and edges movementsbetween them. Consequently, numerous approaches that try lever-aging the power of GNNs to forecast COVID-19-related metrics,such as cases, deaths, and hospitalizations, have been proposed[9, 10, 13, 24]. These approaches have shown promising results inproviding insights into the short-term development of the COVID-19 pandemic. However, informing decision makers about a trendforecast rather than exact numbers might be more beneficial. Com-municating trends can be easier than directly communicating casesor deaths. Trends are strong indicators of relevant changes in theepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Witzke et al.pandemic development and a need for interventions, and their in-terpretation is straightforward. For example, the US Governmentused a 14-day downward trend in COVID-19 cases as a conditionfor potential re-openings [6]. For this purpose, systematically eval-uating GNN-based methods’ ability to correctly forecast trends isessential. Accurate forecasts are especially relevant for phases withchange points, where locations successively experience a changein their trend, such as the peak of a wave.There are secondary time series modalities, such as Googlesearch trends and smart body temperature sensors. These modali-ties potentially reflect changes in trends faster than case numbers.This has been successfully leveraged by Kogan et al.[15] and Stol-erman et al.[31] to develop early-warning systems in the UnitedStates that detect such trend signals up to weeks in advance. 
Similarly, GNNs may utilize nodes with leading time series to improve forecasts for nodes with lagging time series by passing information via the underlying graph, i.e., information from locations where changes occur earlier might be beneficial for forecasting locations where similar changes are delayed.

In this work, we investigate whether mobility data can improve forecasts of 14-day linear trends of the COVID-19 incidence. We evaluate county-level forecasts of a heterogeneous GNN for locations experiencing a change point during the second half of the first omicron wave at the end of 2021 in Germany [19], where cases are beginning to decline. We further analyze whether our GNN can utilize information from counties with leading changes for forecasting counties that experience similar changes later. Finally, we discuss the implications for developing and evaluating future GNN-based methods for pandemic forecasting.

2 MATERIALS AND METHODS

2.1 Graph Construction

Inspired by Kapoor et al. [13], we construct heterogeneous spatio-temporal graph samples with distinct edge types for spatial and temporal connections. We design each graph sample to contain 15 weighted mobility subgraphs, representing movements between the 400 German counties as nodes at successive points in time, t−14, ..., t. We use spatial edges to express these mobility graphs. The directed but unweighted temporal edges then link each county at a time point t−14, ..., t to its representations on up to seven previous days, connecting the spatial components of the graph. Therefore, each graph sample represents a single point in time while still including historical information from previous days. We use mobility data [16, 28] to build the spatial edges.
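As a rough illustration of the sample layout described above, the temporal edges can be generated by flattening (day, county) pairs into node indices. The day-major indexing scheme and function names below are assumptions for illustration, not taken from the paper's code:

```python
import numpy as np

N_COUNTIES = 400
N_DAYS = 15          # one mobility subgraph per day: t-14, ..., t
MAX_LOOKBACK = 7     # temporal edges reach up to seven previous days

def node_id(day: int, county: int) -> int:
    """Flatten (day, county) into a single node index; day 0 is t-14."""
    return day * N_COUNTIES + county

def temporal_edges() -> np.ndarray:
    """Directed, unweighted temporal edges linking each county at day d
    to its own representations on up to MAX_LOOKBACK previous days."""
    src, dst = [], []
    for d in range(N_DAYS):
        for back in range(1, MAX_LOOKBACK + 1):
            if d - back < 0:
                break
            for c in range(N_COUNTIES):
                src.append(node_id(d - back, c))  # earlier representation
                dst.append(node_id(d, c))         # current representation
    return np.array([src, dst])
```

With 15 daily subgraphs and a 7-day lookback, this yields 77 × 400 = 30,800 temporal edges per graph sample, alongside the 15 weighted mobility subgraphs carried by the spatial edge type.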
The used dataset contains the daily movements of nearly one million mobile phone users in Germany and is non-public due to privacy concerns. The number of mobile phones sending location information varies daily, so we normalize the movements by the daily device count and then re-scale all movements with the average daily device count. We find that the daily mobility networks' adjacency matrices are primarily symmetric, i.e., the opposing edges are highly similar. Therefore, we convert the directed into undirected graphs by summing the weights of the edges in both directions. Finally, we denoise the mobility graphs by removing the 30% of non-zero edges with the lowest edge weights, where edges on the thresholding boundary are removed randomly.

The node features of our graph consist of dynamic and static features. We obtain data on the COVID-19 case numbers starting in January 2020 from the Robert Koch Institute [27] and aggregate the data on the county level, resulting in a total of 400 time series. Countering reporting inaccuracies, we calculate the county-level 7-day incidence, a right-aligned 7-day moving sum normalized by the county population and then scaled by 100,000. Each node at time t has the 7-day incidence of the seven previous days down to day t−6 as node features. Additionally, we include a cyclical sine/cosine encoding [33] for the weekday and month. This cyclical encoding aims to improve the learning of short and long-term seasonal effects. Lastly, we use the population density of each county as the only static feature. We collect the census data, such as population size and population density, from the German Federal Office of Statistics [17].

As prediction targets, we use 14-day trends in the COVID-19 incidence obtained from linear approximations. Compared to converting the problem to a classification task, a linear approximation has the advantage that it allows us to estimate the strength of a trend and not only its direction.
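The dynamic node features described above can be sketched as follows. The helper names are hypothetical, and the right-aligned moving sum via `np.convolve` is one possible implementation:

```python
import numpy as np

def seven_day_incidence(daily_cases, population):
    """Right-aligned 7-day moving sum of daily case counts, normalized by
    county population and scaled per 100,000 inhabitants. Output entry i
    covers days i..i+6, i.e., the value is aligned to the window's last day."""
    cases = np.asarray(daily_cases, dtype=float)
    moving_sum = np.convolve(cases, np.ones(7), mode="valid")
    return moving_sum / population * 100_000

def cyclical_encoding(value, period):
    """Sine/cosine pair for a periodic feature, e.g. weekday (period 7)
    or month (period 12)."""
    angle = 2 * np.pi * value / period
    return np.sin(angle), np.cos(angle)
```

For example, a county reporting a constant 10 cases per day with 100,000 inhabitants has a constant 7-day incidence of 70.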
For this purpose, we smooth the 7-day incidence time series for the whole dataset to remove remaining artifacts, using a center-aligned 7-day moving average. For each county and time point t, we perform a linear regression on this smoothed time series, with the known time series values at time points t+1, ..., t+14 as the dependent variable and the number of days from time t into the future, h ∈ {1, ..., 14}, as the independent variable. We then use the slope of this regression, representing a linear trend of the COVID-19 incidence over the next 14 days from time point t, as the ground truth for our forecasts.

2.2 Graph Neural Network

Our GNN is similar to the network used by Kapoor et al. [13] and based on Kipf and Welling's [14] graph convolutional layer. We extend this architecture by using relational graph convolutional layers (R-GCN), an extension for heterogeneous graphs proposed by Schlichtkrull et al. [29] that allows feature updates via multiple edge types, where each edge type has its own set of learned parameters. First, the node features are passed through an initial encoding layer followed by a dropout with a probability of 0.2. Next is a three-layer GNN, each layer with a dropout probability of 0.5. Like Kapoor et al. [13], we add skip-connections and concatenate the output of the initial encoding layer to the output of each R-GCN layer to preserve local information and counter over-smoothing. Lastly, we use a multi-layer perceptron with a single hidden layer to produce the final prediction. We note that for each graph sample, we only use the embeddings of the most recent spatial subgraph to obtain a single forecast for all 400 counties. All layers have 32 hidden units and use a ReLU as the non-linear activation function, except for the last linear layer, which has 16 hidden units. The output layer uses no activation function, allowing positive and negative trend predictions.
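The positive and negative trend values the output layer predicts are the regression slopes defined in Section 2.1. A minimal sketch of that target computation, assuming an already smoothed incidence series (function name and horizon constant are illustrative):

```python
import numpy as np

HORIZON = 14  # days ahead used for the linear trend

def trend_target(smoothed_incidence, t):
    """Slope of a linear regression of the smoothed 7-day incidence at
    days t+1, ..., t+14 on the forecast horizon h = 1, ..., 14."""
    h = np.arange(1, HORIZON + 1)                     # independent variable
    y = smoothed_incidence[t + 1 : t + HORIZON + 1]   # dependent variable
    slope, _intercept = np.polyfit(h, y, deg=1)       # highest degree first
    return slope
```

On a perfectly linear series the slope is recovered exactly; on real incidence data it summarizes both the direction and the strength of the coming 14-day trend.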
We implement our GNN in PyTorch [25] and PyTorch Geometric [7].

2.3 Training setup

We use a mean squared error (MSE) regression loss and an ADAM optimizer with a learning rate of 1.33e−4 and a weight decay of 1e−5. We employ a batch size of 128 and train for a maximum of 250 epochs with early stopping, with a patience of 10 epochs without improvement.

We adopt a rolling-origin evaluation approach [32] where we extend the training set by the test sample of the previous iteration. We test from November 10, 2021, until December 19, 2021, with all previous data being used for training and validation. We use all data from January 15, 2020, onwards for training and validation. Therefore, the training and validation set contains 665 samples for the first test sample and grows to 704 samples for the last test sample. Our validation set consists of the day after the last training sample and is used for early stopping and model selection. We always keep a 17-day gap between the validation and test samples to avoid information leakage to the test sample while also mimicking a real-world situation where we use all the available data to make a forecast.

To counter the sparseness of training data and avoid conditioning our model too strongly on periods that contain limited information, such as summer periods with low incidences, we oversample the training set by duplicating specific samples. We combine the global German COVID-19 incidence time series with an exponential function, assigning higher importance to more recent dates. We convert the result into a discrete probability distribution where each sample is assigned a probability. We then draw from this distribution with replacement.
We use an oversampling rate of 10.

2.4 Evaluation Scenario

While we train our models using an MSE regression loss, this metric is not optimal for evaluating our models' performance. Different counties experience the considered phase of the pandemic differently, and a metric dependent on the range of the trend values could bias our evaluation. Therefore, we evaluate the models' performance using the Mean Absolute Percentage Error (MAPE) (Appendix A.1) and the symmetric Mean Absolute Percentage Error (sMAPE) (Appendix A.2). Further, while MAPE and sMAPE provide insight into the error in the magnitude of the trend, we are also interested in the model's ability to predict the direction of the trend. For this purpose, we evaluate our models with an adaption of the Mean Directional Accuracy (MDA) (Appendix A.3).

To investigate if our models can leverage mobility data to improve predictions in counties with lagging change points, we consider the first omicron wave at the end of 2021, from November 10 to December 19. For this period, we extract the date on which each county's corresponding smoothed COVID-19 7-day incidence time series has its maximum, i.e., its peak. We consider this the point when the trend will likely change from positive to negative as the incidence begins to decline.

After obtaining the peak for each county, we use a 7-day moving window to evaluate how the prediction performance develops as more counties reach their peak. For each window, we collect all counties that have their peak inside the current window. We then compute all metrics for these counties using the forecast and ground truth of their peak date and shift the window by one day.

We conduct additional experiments with the same evaluation setup but replace the adjacency matrices of the mobility subgraphs with identity matrices to verify that differences in performance can be attributed to the mobility data.
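The appendix definitions of the metrics are not reproduced in this extract, so the sketch below uses common formulations and should be read as an assumption. In particular, the MDA adaption is interpreted here as the fraction of matching signs between predicted and true trend slopes:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (undefined where y_true == 0)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true))

def smape(y_true, y_pred):
    """Symmetric MAPE; this common variant is bounded in [0, 1]."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

def mda(y_true, y_pred):
    """Directional accuracy: since the targets are already trend slopes,
    we score the fraction of matching signs (an assumed reading of the
    paper's MDA adaption)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.sign(y_true) == np.sign(y_pred))
```

MAPE and sMAPE capture the error in trend magnitude, while MDA captures whether the model gets the direction (up vs. down) right, which matches the sMAPE range shown in Figure 1.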
Thus, we train models with the same number of parameters but do not include spatial information.

3 RESULTS

For all experiments, there is a clear performance improvement as more counties reach their peak over time that is consistent across all metrics. This improvement is more pronounced for models with mobility data than for those without spatial information (see Figure 1). To verify that our finding that models with mobility data perform better than models without spatial information is significant, we conduct paired one-tail Wilcoxon signed-rank tests with significance level α = 0.05 for all metrics. After correcting for multiple testing using the Benjamini-Hochberg method [1], we find that for MAPE (p-value ≈ 0.021), sMAPE (p-value ≈ 2.738e−6), and MDA (p-value ≈ 6.661e−6) the mobility-conditioned models significantly outperform the models without spatial information.

Figure 1: (A) sMAPE (lower is better) for peaks in 7-day moving windows. The performance improves over time for both experiments before declining. The effect occurs earlier and is greater for models with mobility data. (B) The MDA (higher is better) almost mirrors the sMAPE's behavior. This suggests that while more recent training data improve predictions, this effect is amplified by mobility data.

Figure 1 (A, B) clearly shows that the improvements in sMAPE and MDA happen earlier and are more extreme for the models with mobility data. This difference indicates that the improvements cannot solely be attributed to the fact that the models have seen more recent and relevant data and are therefore conditioned better. Furthermore, due to the 17-day gap to avoid information leakage, the model is unlikely to have seen any recent negative trends for a county before its peak during training.
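The significance testing described above can be reproduced in outline with `scipy.stats.wilcoxon` plus a Benjamini-Hochberg adjustment. Pairing the per-window metric values of the two model variants is an assumption about the exact test inputs, and the BH helper is a hand-rolled sketch:

```python
import numpy as np
from scipy.stats import wilcoxon

def paired_one_tailed_wilcoxon(errors_mobility, errors_baseline):
    """One-sided paired test: are mobility-model errors systematically lower?"""
    return wilcoxon(errors_mobility, errors_baseline, alternative="less").pvalue

def benjamini_hochberg(pvalues):
    """Adjusted p-values under the Benjamini-Hochberg step-up procedure."""
    p = np.asarray(pvalues, float)
    m = len(p)
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank_from_end, idx in enumerate(order[::-1]):
        rank = m - rank_from_end  # 1-based rank of this p-value
        running_min = min(running_min, p[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted
```

For a metric where higher is better, such as MDA, the alternative hypothesis would be flipped (e.g. `alternative="greater"`); the paper's appendix would settle the exact orientation.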
However, as earlier counties are already past their peak and are experiencing decreasing incidences, they can share this information with counties where peaks occur later.

4 DISCUSSION AND CONCLUSION

We find that mobility data significantly improve forecasting performance compared to experiments without spatial information. We have two hypotheses for our observations. Firstly, the structural information in the mobility networks and their variation over time might lead to improved predictions. Secondly, our GNN model can pick up information from counties that experience changes, such as beginning downtrends in incidences, earlier and use them for forecasts of counties where these changes occur delayed. With our current experimental setup, we are unable to disentangle these hypotheses. However, further experiments, for example, using static spatial connections, could provide insights.

Counties that are the first to experience a change in trend seem unable to benefit from mobility data. However, these counties might be of the highest interest, as changes occur earlier there and are likely stronger indicators of the need for interventions. Therefore, it could be valuable to include additional nodes representing neighboring nations in our graph to leverage potentially leading information from them.

Our analysis suggests that systematically analyzing models' capabilities of making accurate trend forecasts during times of interest is highly valuable. Different components, such as the magnitude and direction of a trend, are relevant for providing a holistic understanding in an epidemiological context. It could be helpful to extend evaluations by applying post-hoc explainability methods for graph-based models to understand better how the models make their predictions.
Such explanations could provide insights for epidemiologists to construct hypotheses regarding the pandemic's current state and spreading behavior.

We showed the capabilities of a heterogeneous spatio-temporal GNN in leveraging mobility data to improve forecasts for counties with lagging time series directly after a change in trend. We suggest that including more global information via nodes representing other nations could extend this effect to leading counties where changes occur first. Currently, we evaluate single rolling-origin evaluation experiments for one change point of the COVID-19 pandemic in Germany. To substantiate our findings, we will consider different phases of the pandemic, including change points with a switch to upward trends. Furthermore, we will run experiments repeatedly to verify the robustness of our results and establish confidence bounds.

ACKNOWLEDGMENTS

This work was supported by the German BMWK through the DAKI-FWS project [01MK21009E to B.Y.R.].
16OQ4WdOCPS
A Comprehensive Review of a COVID-19 Forecasting Paper: Promising Approach with Room for Clarification and Improvement
3: Marginally above acceptance threshold
Overall, the paper is well-written and provides valuable insights into the potential benefits of using mobility data and GNNs for COVID-19 forecasting. While the paper's quality is commendable, with a clear research objective and a well-defined methodology, the evaluation is lacking and some things need better explanation (detailed in the list below). The paper's clarity is also good but needs improvement in a few areas. The significance of this work is evident in its potential to improve pandemic forecasting and inform decision-making, but leveraging mobility data to enhance predictions is not new.

Pros:
- Model architecture and training setup: The model architecture is well chosen, and the training setup and evaluation metrics are appropriate.
- Preprocessing: Smoothing the 7-day incidence data using a center-aligned 7-day moving average (and normalizing where necessary) is a good approach to enhance the quality of the data and facilitate more accurate trend analysis.

Cons:
- Accuracy of predicted trends: Using linear regression to define the 14-day trends raises questions about the accuracy of these trends themselves. It might be more appropriate to linearly approximate the ground truth data when evaluating the model's performance, since that is the target being predicted.
- Evaluation scenario: The evaluation seems lacking since it only considers a one-month period. There is also confusion about how the peak is determined using a 7-day moving window. If the window is limited to 7 days, there will always be a peak within that period, since the paper states "We extract the date on which each county's corresponding smoothed COVID-19 7-day incidence time series has its maximum, i.e., its peak."
If it is simply the peak of ALL 7-day incidence time series for each county, then it is more than reasonable; if not, further clarification is needed to understand the evaluation process.
- Training data size: The paper does not clearly state the size of the training data. It mentions considering the 2nd half of the first omicron wave from November 10 to December 19, which implies two months of data. However, it is unclear when the summer period is included in the training data.
- Oversampling: The necessity of oversampling is questionable, especially when predicting 14-day trends, since low incidence should not affect trends. It is also hard to determine the negative effects of oversampling because the training data's size is not stated. If ALL previous periods are used for training, there is a concern that discrete oversampling may not be suitable, as different waves of variants can behave differently due to varying growth rates. If only a brief period (the 1st half of the omicron wave) is used, it can potentially lead to overfitting due to a small amount of data, especially when the model is evaluated during a wave of a different variant.
- Testing on declining trends only: It is unclear why the evaluation is conducted only on a period where the trend is declining. The evaluation should consider periods of both increasing and decreasing trends to provide a comprehensive assessment of the model's performance.

In conclusion, while the paper presents some positive aspects, including the model architecture, training setup, and evaluation metrics, there are significant concerns and areas that require clarification and justification. The questionable use of predicted trends as target data, the unclear evaluation scenario and peak determination, and the potential issues with oversampling raise some doubts about the methodology.
It is also not clearly discussed how the GNN can utilize information from counties with leading changes to forecast counties that experience similar changes later. For future work, it would also be good to provide comparisons to other, non-GNN models (even something as simple as the linear regression being done). Further elaboration on and addressing of these concerns would greatly improve the quality and clarity of this work. The title of the paper could also be changed, since the forecasting being done is of COVID-19 trends rather than incidence.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
n1ODM_LrRs
KDD.org/2023/Workshop/epiDAMIK
2023
Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract)
["Simon Witzke", "Noel Danz", "Katharina Baum", "Bernhard Y Renard"]
The COVID-19 pandemic has had a considerable global impact over the last few years. Many efforts were made to understand and estimate its development. The availability of large amounts of data, including mobility data, has led to numerous Graph Neural Networks (GNN) being proposed to leverage this data and forecast case numbers for the short-term future. However, information about trend developments, especially where trends reverse directions, is crucial in informing decisions. GNNs may be able to use information from regions where trends change first to improve predictions for locations with delays. We consider the first omicron wave in Germany at the end of 2021 and compare a heterogeneous GNN using mobility data with a model without spatial information. We observe that, for this period, mobility data significantly improve forecasts and specifically that improvements occur earlier in time. Using GNNs and mobility data enables leveraging information from counties affected earlier to improve forecasts for counties affected later. We conclude that such performance improvements could be transferred to counties with earlier change points by also including neighboring nations in the graph structure. Further, we emphasize the need for systematic contextual evaluation of GNN-based models for forecasting pandemic trends.
["mobility data", "trend estimation", "graph neural networks", "covid-19"]
ABSTRACTThe COVID-19 pandemic has had a considerable global impactover the last few years. Many efforts were made to understandand estimate its development. The availability of large amounts ofdata, including mobility data, has led to numerous Graph NeuralNetworks (GNN) being proposed to leverage this data and forecastcase numbers for the short-term future. However, information abouttrend developments, especially where trends reverse directions, iscrucial in informing decisions. GNNs may be able to use informationfrom regions where trends change first to improve predictionsfor locations with delays. We consider the first omicron wave inGermany at the end of 2021 and compare a heterogeneous GNNusing mobility data with a model without spatial information. Weobserve that, for this period, mobility data significantly improveforecasts and specifically that improvements occur earlier in time.Using GNNs and mobility data enables leveraging information fromcounties affected earlier to improve forecasts for counties affectedlater. We conclude that such performance improvements could betransferred to counties with earlier change points by also includingneighboring nations in the graph structure. Further, we emphasizethe need for systematic contextual evaluation of GNN-based modelsfor forecasting pandemic trends.KEYWORDSmobility data, trend estimation, graph neural networks, covid-19ACM Reference Format:Simon Witzke, Noel Danz, Katharina Baum, and Bernhard Y. Renard. 2023.Mobility data improve forecasting of COVID-19 incidence trends usingGraph Neural Networks (Extended Abstract). In epiDAMIK 2023: 6th epi-DAMIK ACM SIGKDD International Workshop on Epidemiology meets DataMining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. 
ACM,New York, NY, USA, 5 pages.Permission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).1 INTRODUCTIONSpreading from Wuhan, China, in late 2019, the COVID-19 pan-demic has held humanity in its grasp until recently [35]. The pan-demic has had drastic consequences, with estimates of almost fifteenmillion excess deaths only in 2020 and 2021 [20] and considerableeconomic and social damages [5]. The global scale of the pandemicled to large amounts of data on different modalities related to epi-demic spread being shared, such as mobility and sequencing data.These have been made available to support the development offorecasting methods intended to inform decision makers concern-ing potential interventions [21, 23]. Human mobility is a centraldriver in the geographical spread of epidemics caused by air-bornediseases [3], enabling the virus to travel between regions and, inthe case of COVID-19, rapidly infecting most of the world. Dur-ing the pandemic, researchers have combined mobility networkswith mechanistic models to understand the influences of changedmobility behavior and further highlight its importance for the pan-demic’s development [4, 30]. Schlosser et al.[30] have shown thatlockdowns strongly impacted mobility structures during the firstCOVID-19 wave in Germany and that the associated reduction inmobility can slow the virus’ geographical spread.Various spatio-temporal approaches using Recurrent Neural Net-works and EXtreme Gradient Boosting have been proposed to fore-cast county-level COVID-19 metrics [11, 18, 22, 34]. 
However, recentadvances in deep graph learning have led to Graph Neural Networks(GNNs) gaining popularity in domains as diverse as traffic forecast-ing [12] or computational chemistry [26]. Human mobility betweengeographical regions can naturally be represented as graphs, wherenodes represent locations, such as counties, and edges movementsbetween them. Consequently, numerous approaches that try lever-aging the power of GNNs to forecast COVID-19-related metrics,such as cases, deaths, and hospitalizations, have been proposed[9, 10, 13, 24]. These approaches have shown promising results inproviding insights into the short-term development of the COVID-19 pandemic. However, informing decision makers about a trendforecast rather than exact numbers might be more beneficial. Com-municating trends can be easier than directly communicating casesor deaths. Trends are strong indicators of relevant changes in theepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Witzke et al.pandemic development and a need for interventions, and their in-terpretation is straightforward. For example, the US Governmentused a 14-day downward trend in COVID-19 cases as a conditionfor potential re-openings [6]. For this purpose, systematically eval-uating GNN-based methods’ ability to correctly forecast trends isessential. Accurate forecasts are especially relevant for phases withchange points, where locations successively experience a changein their trend, such as the peak of a wave.There are secondary time series modalities, such as Googlesearch trends and smart body temperature sensors. These modali-ties potentially reflect changes in trends faster than case numbers.This has been successfully leveraged by Kogan et al.[15] and Stol-erman et al.[31] to develop early-warning systems in the UnitedStates that detect such trend signals up to weeks in advance. 
Simi-larly, GNNs may utilize nodes with leading time series to improveforecasts for nodes with lagging time series by passing informationvia the underlying graph, i.e., information from locations wherechanges occur earlier might be beneficial for forecasting locationswhere similar changes are delayed.In this work, we investigate whether mobility data can improveforecasts of 14-day linear trends of the COVID-19 incidence. Weevaluate county-level forecasts of a heterogeneous GNN for loca-tions experiencing a change point during the second half of the firstomicron wave at the end of 2021 in Germany [19], where cases arebeginning to decline. We further analyze whether our GNN can uti-lize information from counties with leading changes for forecastingcounties that experience similar changes later. Finally, we discussthe implications for developing and evaluating future GNN-basedmethods for pandemic forecasting.2 MATERIALS AND METHODS2.1 Graph ConstructionInspired by Kapoor et al.[13], we construct heterogeneous spatio-temporal graph samples with distinct edge types for spatial andtemporal connections. We design each graph sample to contain 15weighted mobility subgraphs, representing movements betweenthe 400 German counties as nodes at successive points in time,t−14,...,t . We use spatial edges to express these mobility graphs.The directed but unweighted temporal edges then link each countyat a time point t−14,...,t to its representations on up to sevenprevious days, connecting the spatial components of the graph.Therefore, each graph sample represents a single point in timewhile still including historical information from previous days.We use mobility data [16, 28] to build the spatial edges. 
The useddataset contains the daily movements of nearly one million mobilephone users in Germany and is non-public due to privacy concerns.The number of mobile phones sending location information variesdaily, so we normalize the movements by the daily device count andthen re-scale all movements with the average daily device count.We find that the daily mobility networks’ adjacency matrices areprimarily symmetric, i.e., the opposing edges are highly similar.Therefore, we convert the directed into undirected graphs by sum-ming the weights of the edges in both directions. Finally, we denoisethe mobility graphs by removing 30% of the non-zero edges withthe lowest edge weights, where edges on the thresholding boundaryare removed randomly.The node features of our graph consist of dynamic and staticfeatures. We obtain data on the COVID-19 case numbers startingin January 2020 from the Robert Koch Institute [27] and aggregatethe data on the county level, resulting in a total of 400 time series.Countering reporting inaccuracies, we calculate the county-level7-day incidence, a right-aligned 7-day moving sum normalized bythe county population and then scaled by 100,000. Each node attimethas the 7-day incidence of the previous seven days until dayt−6as node features. Additionally, we include a cyclical sine/cosineencoding [33] for the weekday and month. This cyclical encodingaims to improve the learning of short and long-term seasonal effects.Lastly, we use the population density of each county as the onlystatic feature. We collect the census data, such as population sizeand population density, from the German Federal Office of Statistics[17].As prediction targets, we use 14-day trends in the COVID-19incidence obtained from linear approximations. A linear approxi-mation has the advantage that it allows us to estimate the strengthof a trend and not only its direction compared to converting theproblem to a classification task. 
For this purpose, we smooth the7-day incidence time series for the whole dataset to remove remain-ing artifacts, using a center-aligned 7-day moving average. For eachcounty and time point t, we perform a linear regression on thissmoothed time series with the known time series values at timepointst+1,...,t+14as the dependent variable and the number ofdays from time tinto the future h∈1,...,14as the independentvariable. We then use the slope of this regression, representing alinear trend of the COVID-19 incidence over the next 14 days fromtime pointt, as the ground truth for our forecasts.2.2 Graph Neural NetworkOur GNN is similar to the network used by Kapoor et al.[13] andbased on Kipf and Welling’s[14] graph convolutional layer. Weextend this architecture by using relational graph convolutionallayers (R-GCN), an extension for heterogeneous graphs proposed bySchlichtkrull et al.[29] that allows feature updates via multiple edgetypes, where each edge type has its own set of learned parameters.First, the node features are passed through an initial encoding layerfollowed by a dropout with a probability of 0.2. Next is a three-layer GNN, each with a dropout probability of 0.5. Like Kapooret al.[13], we add skip-connections and concatenate the output ofthe initial encoding layer to the output of each R-GCN layer topreserve local information and counter over-smoothing. Lastly, weuse a multi-layer perceptron with a single hidden layer to producethe final prediction. We note that for each graph sample, we onlyuse the embeddings of the most recent spatial subgraph to obtain asingle forecast for all 400 counties. All layers have 32 hidden unitsand use a ReLU as the non-linear activation function, except forthe last linear layer, which has 16 hidden units. The output layeruses no activation function, allowing positive and negative trendpredictions. 
We implement our GNN in PyTorch [25] and PyTorchGeometric [7].2.3 Training setupWe use a mean squared error (MSE) regression loss and an ADAMoptimizer with a learning rate of 1.33e−4and weight decay of 1e−5.Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract) epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAWe employ a batch size of 128 and train for a maximum of 250epochs with early stopping, with a patience of 10 epochs withoutimprovement.We adopt a rolling-origin evaluation approach [32] where weextend the training set by the test sample of the previous iteration.We test from November 10, 2021, until December 19, 2021, with allprevious data being used for training and validation. We use alldata from January 15, 2020, for training and validation. Therefore,the training and validation set contains 665 samples for the firsttest sample and grows to 704 samples for the last test sample. Ourvalidation set consists of the day after the last training sample andis used for early stopping and model selection. We always havea 17-day gap between the validation and test samples to avoidinformation leakage to the test sample while also mimicking areal-world situation where we use all the available data to make aforecast.To counter the sparseness of training data and avoid conditioningour model too strongly on periods that contain limited information,such as summer periods with low incidences, we oversample thetraining set by multiplicating specific samples. We combine theglobal German COVID-19 incidence time series with an exponentialfunction, assigning higher importance to more recent dates. Weconvert the result into a discrete probability distribution whereeach sample is assigned a probability. We then draw from thisdistribution with replacement. 
We use an oversampling rate of 10.

2.4 Evaluation Scenario

While we train our models using an MSE regression loss, this metric is not optimal for evaluating our models' performance. Different counties experience the considered phase of the pandemic differently, and a metric dependent on the range of the trend values could bias our evaluation. Therefore, we evaluate the models' performance using the Mean Absolute Percentage Error (MAPE) (Appendix A.1) and the symmetric Mean Absolute Percentage Error (sMAPE) (Appendix A.2). Further, while MAPE and sMAPE provide insight into the error in the magnitude of the trend, we are also interested in the model's ability to predict the direction of the trend. For this purpose, we evaluate our models with an adaptation of the Mean Directional Accuracy (MDA) (Appendix A.3). To investigate if our models can leverage mobility data to improve predictions in counties with lagging change points, we consider the first omicron wave at the end of 2021, from November 10 to December 19. For this period, we extract the date on which each county's corresponding smoothed COVID-19 7-day incidence time series has its maximum, i.e., its peak. We consider this the point when the trend will likely change from positive to negative as the incidence begins to decline. After obtaining the peak for each county, we use a 7-day moving window to evaluate how the prediction performance develops as more counties reach their peak. For each window, we collect all counties that have their peak inside the current window. We then compute all metrics for these counties using the forecast and ground truth of their peak date and shift the window by one day. We conduct additional experiments with the same evaluation setup but replace the adjacency matrices of the mobility subgraphs with identity matrices to verify that differences in performance can be attributed to the mobility data.
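Hedged sketches of the three metrics (the paper defines them in Appendices A.1-A.3, which are not reproduced here; in particular, the sign-agreement form of MDA below is our reading of "directional accuracy" for trend forecasts):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def smape(y_true, y_pred):
    """Symmetric Mean Absolute Percentage Error (0..2 variant)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(2.0 * np.abs(y_pred - y_true)
                         / (np.abs(y_true) + np.abs(y_pred))))

def mda(y_true, y_pred):
    """Fraction of forecasts whose predicted trend direction (sign)
    matches the true trend direction."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.sign(y_true) == np.sign(y_pred)))
```

Because trends can be negative, MDA complements the two percentage errors: it ignores magnitude entirely and only scores whether the model called the direction correctly.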
Thus, we train models with the same number of parameters but do not include spatial information.

3 RESULTS

For all experiments, there is a clear performance improvement as more counties reach their peak over time that is consistent across all metrics. This improvement is more pronounced for models with mobility data than for those without spatial information (see Figure 1). To verify that our finding that models with mobility data perform better than models without spatial information is significant, we conduct paired one-tail Wilcoxon signed-rank tests with significance level α = 0.05 for all metrics. After correcting for multiple testing using the Benjamini-Hochberg method [1], we find that for MAPE (p-value ≈ 0.021), sMAPE (p-value ≈ 2.738e−6), and MDA (p-value ≈ 6.661e−6) the mobility-conditioned models significantly outperform the models without spatial information.

Figure 1: (A) sMAPE (lower is better) for peaks in 7-day moving windows. The performance improves over time for both experiments before declining. The effect occurs earlier and is greater for models with mobility data. (B) The MDA (higher is better) almost mirrors the sMAPE's behavior. This suggests that while more recent training data improve predictions, this effect is amplified by mobility data.

Figure 1 (A, B) clearly shows that the improvements in sMAPE and MDA happen earlier and are more extreme for the models with mobility data. This difference indicates that the improvements cannot solely be attributed to the fact that the models have seen more recent and relevant data and are therefore conditioned better. Furthermore, due to the 17-day gap to avoid information leakage, the model is unlikely to have seen any recent negative trends for a county before its peak during training.
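The Benjamini-Hochberg correction used above is a standard step-up adjustment; a minimal plain-Python sketch (not the authors' code) shows how the reported p-values would be adjusted:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Traverse from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

A test is significant at level α if its adjusted p-value stays below α, which is how the three reported metrics remain significant after correction.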
However, as earlier counties are already past their peak and are experiencing decreasing incidences, they can share this information with counties where peaks occur later.

4 DISCUSSION AND CONCLUSION

We find that mobility data significantly improve forecasting performance compared to experiments without spatial information. We have two hypotheses for our observations. Firstly, the structural information in the mobility networks and their variation over time might lead to improved predictions. Secondly, our GNN model can pick up information from counties that experience changes, such as beginning downtrends in incidences, earlier and use them for forecasts of counties where these changes occur delayed. With our current experimental setup, we are unable to disentangle these hypotheses. However, further experiments, for example, using static spatial connections, could provide insights. Counties that are the first to experience a change in trend seem unable to benefit from mobility data. However, these counties might be of the highest interest as changes occur earlier and are likely more vital indicators of the need for interventions. Therefore it could be valuable to include additional nodes representing neighboring nations in our graph to leverage potentially leading information from them. Our analysis suggests that systematically analyzing models' capabilities of making accurate trend forecasts during times of interest is highly valuable. Different components, such as the magnitude and direction of a trend, are relevant for providing a holistic understanding in an epidemiological context. It could be helpful to extend evaluations by applying post-hoc explainability methods for graph-based models to understand better how the models make their predictions.
Such explanations could provide insights for epidemiologists to construct hypotheses regarding the pandemic's current state and spreading behavior. We showed the capabilities of a heterogeneous spatio-temporal GNN in leveraging mobility data to improve forecasts for counties with lagging time series directly after a change in trend. We suggest that including more global information via nodes representing other nations could extend this effect to leading counties where changes occur first. Currently, we evaluate single rolling-origin evaluation experiments for the change point of the COVID-19 pandemic in Germany. To substantiate our findings, we will consider different phases of the pandemic, including change points with a switch to upward trends. Furthermore, we will run experiments repeatedly to verify the robustness of our results and establish confidence bounds.

ACKNOWLEDGMENTS

This work was supported by the German BMWK through the DAKI-FWS project [01MK21009E to B.Y.R.].
g88vOtuB8R
Mobility data improve forecasting of COVID-19 incidence using Graph Neural Networks
3: Marginally above acceptance threshold
Quality: The paper is written in an understandable manner. The experiments are done on (or shown on) a limited set, and the authors don't compare with other baseline methods for case trend analysis of COVID-19.

Clarity: It can be improved further to make the authors' contributions clearly delineated from the existing literature. The explanation of the graph would be better if shown visually, but I understand that there are space limitations for this submission.

Originality: The idea is something that has already been explored by other researchers to answer different or similar questions related to pandemic forecasting.

Significance: The work is significant to the workshop given the problem it is tackling. It is also interesting to society given the possibilities of other epidemics.

Pros:
- Compares different error metrics and also performs a statistical test to show significance.
- Shows that the addition of mobility data helps forecasting; counties with earlier trends can also help predict those with later ones.

Cons:
- No baseline comparison apart from their own method without spatial information.
- Missing citations: 1) mobility-network related work, e.g., "Mobility network models of COVID-19 explain inequities and inform reopening"; this and follow-up works seem closely connected to what the authors explore; 2) trends being more important than actual numbers for pandemic forecasts; 3) using information from one county to help another. I am aware of works from the CDC forecasting hub, the XPRIZE pandemic challenge, etc. that have discussed these issues. It would be relevant to cite those papers and show how this work is different.
3: The reviewer is fairly confident that the evaluation is correct
J8Gc5acxME
KDD.org/2023/Workshop/epiDAMIK
2023
Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19
["Zhicheng Zhang", "Sonja Neumeister", "Angel Desai", "Maimuna S. Majumder", "Fei Fang"]
The COVID-19 pandemic has emphasized the necessity for effective tools to monitor and predict epidemiological trends. Traditional approaches to disease surveillance possess certain limitations, leading to the emergence of wastewater-based epidemiology (WBE) as a complementary approach. WBE has demonstrated a strong correlation with traditional epidemiological indicators (e.g., number of clinical cases and hospitalization), which makes it a valuable asset in informing public health decision-making processes. Despite the promising prospects of WBE, it faces certain challenges, including restricted data accessibility, geographical bias in data coverage, high data noise levels, and significant data distribution shifts. In this study, we examine the feasibility of utilizing exclusively two publicly available data, specifically aggregated wastewater data and reported case counts, for epidemiological forecasting in the COVID-19 pandemic. We incorporate a variety of statistical and machine learning models in an attempt to address the inherent volatility and bias of the data. We further introduce the usage of the segmentation method during the evaluation phase as a better evaluation metric. Our empirical results show that, even with limited data, performing epidemiological forecasting is possible, and its performance is comparable with methods that use more diverse data sources, suggesting its potential for broader health applications. Additionally, we utilize the insights from results on the length of the forecasting horizon to provide practical guidelines regarding real-world prediction.
["COVID-19", "Disease Surveillance", "Wastewater-Based Epidemiology", "Time-Series Forecasting"]
ABSTRACT

The COVID-19 pandemic has emphasized the necessity for effective tools to monitor and predict epidemiological trends. Traditional approaches to disease surveillance possess certain limitations, leading to the emergence of wastewater-based epidemiology (WBE) as a complementary approach. WBE has demonstrated a strong correlation with traditional epidemiological indicators (e.g., number of clinical cases and hospitalization), which makes it a valuable asset in informing public health decision-making processes. Despite the promising prospects of WBE, it faces two main challenges: restricted data accessibility, and high intrinsic noise and distribution shift in the data. In this study, we examine the feasibility of utilizing exclusively two publicly available data sources, specifically aggregated wastewater data and reported case counts, for epidemiological forecasting in the COVID-19 pandemic. We incorporate a variety of statistical and machine learning models in an attempt to address the inherent volatility and bias of the data. We further introduce the usage of a segmentation method during the evaluation phase as a better evaluation metric. Our empirical results show that, even with limited data, performing epidemiological forecasting is possible, and its performance is comparable with methods that use more diverse data sources, suggesting its potential for broader health applications. Additionally, we utilize the insights from results on the length of the forecasting horizon to provide practical guidelines regarding real-world prediction.

KEYWORDS

COVID-19, Disease Surveillance, Wastewater-Based Epidemiology, Time-Series Forecasting

ACM Reference Format: Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fang. 2023.
Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 8 pages.

1 INTRODUCTION

The COVID-19 pandemic has emphasized the importance of reliable tools for monitoring and forecasting epidemiological trends. Traditional disease surveillance approaches, based on clinical data, have limitations in both timeliness and coverage. Wastewater-based epidemiology (WBE) has thus emerged as a complementary approach to track the spread of infectious diseases in communities [8]. WBE has demonstrated significant potential in the monitoring and forecasting of epidemics, particularly during the COVID-19 pandemic. Several studies have utilized wastewater data to forecast clinical cases, hospitalizations, and ICU admissions, as well as to evaluate the effectiveness of governmental policies in containing COVID-19 transmission [10, 12, 13, 27]. Studies have found a strong link between data from wastewater surveillance and disease indicators. This link can support better health decisions, wiser use of resources, and timely interventions. However, despite the promising results of WBE, there are two main challenges that need to be addressed for broader practical applications, which haven't been thoroughly explored in the existing literature.
First, current approaches using WBE mainly rely on small-scale, privately collected data, such as those from university campuses [36], or inaccessible private-sector wastewater data [10, 12]. Often, methods supplement wastewater data with additional data sources, including the Community Vulnerability Index (CCVI) and vaccination records [13]. In a broader context, the sharing of wastewater data is restricted, and its coverage is geographically skewed towards economically developed areas that have a greater number of wastewater monitoring facilities [18, 23]. Second, real-world epidemiological data is inherently noisy due to various factors such as sampling errors and challenges in attributing causes [24]. This issue is further exacerbated during global pandemics like COVID-19, where the temporal correlations within the data can drastically shift over the course of the pandemic, undermining the accuracy of predictions. Such drastic shifts can occur when a new variant emerges and rapidly becomes dominant or when vaccination rates significantly increase, both of which cause distinct changes in epidemiological trends. These shifts underscore the need for robust forecasting models capable of adapting to evolving pandemic dynamics. In this study, we focus on two publicly available datasets: aggregated wastewater data and reported case counts, both at the country level. This selection of datasets is driven by the ready accessibility and reliability of these data sources: wastewater data is regularly published not only by the CDC's National Wastewater Surveillance System (NWSS) but also by other agencies adhering to CDC protocols, while case count numbers are widely reported. This widespread adoption of consistent data-gathering protocols ensures the broad availability and comparability of these datasets.
It also aims to alleviate volatility and mitigate biases inherent in smaller or less developed regions. The COVID-19 pandemic's landscape has been constantly changing, influencing how we assess its spread and impact. Initially, the case count data, encompassing both severe and mild cases, offered valuable insight into the pandemic's trajectory. This metric was particularly comprehensive during periods of widespread testing and reporting. However, as the pandemic has progressed, testing methods and reporting practices have evolved, with an increase in home testing and a decrease in reports to governmental agencies. While these changes present challenges, case count still serves as a strong signal of disease prevalence. Our core objective here is to investigate the feasibility of using only these two publicly available data sources, case counts and wastewater data, for epidemiological forecasting. To evaluate this feasibility, we model the problem as a time-series forecasting problem characterized by significant distribution shifts in the data over time. We employ data preprocessing techniques to manage misaligned time-series data and introduce a segmentation algorithm during the evaluation phase to account for temporal shifts. This segmentation method enhances evaluation accuracy by ensuring that the test data spans only one wave, so that the test error is no longer masked by the results in other waves, and we empirically show it to be a better evaluation criterion. To balance interpretability, simplicity, and prediction accuracy, we implement a variety of statistical and machine learning models, including linear regression, ARIMAX, Gaussian Process Regression, multi-layer perceptron (MLP), and Long Short-Term Memory (LSTM) networks. The diversity of these modeling techniques enables us to compare the efficiency of simpler models with their more complex, deep-learning counterparts.
Finally, our analysis shows that by only using aggregated wastewater data and reported case counts, we can achieve performance comparable with a random-forest model trained on diverse data sources, including CCVI indexes and vaccination records, in [13]. We further empirically demonstrate that the segmentation method provides a more accurate evaluation, particularly during volatile periods such as the case count peak in early 2022. Based on the empirical results on the effect of forecasting horizons of different lengths, we provide a practical recommendation for selecting the forecasting horizon in order to optimize the balance between reaction time and prediction accuracy.

2 RELATED WORK

Wastewater-based epidemiology. Wastewater-based epidemiology (WBE) has become an important tool for monitoring and forecasting epidemiological trends over the past two decades [8]. During the recent outbreak of COVID-19 [6], wastewater data was used to forecast clinical cases, hospitalizations, and ICU admissions, as well as to evaluate the effectiveness of governmental policies [10, 12, 13, 27]. Galani et al. [10], Kaplan et al. [12], and Stephens et al. [27] measured the wastewater for a number of monitoring sites and empirically demonstrated a strong correlation between hospitalizations and wastewater surveillance data using regression models. Kaplan et al. [12] used wastewater data to estimate reproductive numbers. Li et al. [13] used data from 100 USA counties to predict hospital and ICU admission numbers using random forest models. However, despite its effectiveness in predicting epidemiological trends, wastewater data were not widely shared with the public or accessible to researchers, making it infeasible to perform additional analyses [18].
Current works often rely on small-scale, privately collected datasets [36], or supplement the dataset with other diverse sources of data, like vaccination records and CCVI indexes [13]. In addition, the coverage of wastewater data is severely biased toward economically more developed geographic regions with more wastewater monitoring facilities [18, 23]. In an attempt to address these challenges, our approach differs from previous work in that we aim to assess the promise of using exclusively two publicly available data sources, aggregated wastewater data and reported case count data, that are easily accessible to the public for epidemiological forecasting. Specifically, we focus on data within the United States while averaging it across the country to minimize bias in wastewater data from smaller or less-developed counties and states.

Time-series forecasting. Time series forecasting has been a long-standing problem in the fields of statistics and machine learning, attracting significant research attention. Classical methods [3, 16] provide a comprehensive understanding of time series analysis and forecasting and offer both theoretical insights and statistical guarantees. The advent of deep learning-based methods, particularly recurrent networks, has substantially improved the ability to capture temporal correlations in training data, as demonstrated by works including recurrent neural networks (RNNs) [22] and long short-term memory (LSTM) networks [11]. In recent years, long-term series forecasting (LSTF) research has focused on transformer-based models [30] due to their remarkable success in various application domains, such as natural language processing (NLP) [20] and computer vision (CV) [15]. Transformer-based LSTF models [14, 32, 34, 37, 38] have demonstrated impressive forecasting performance while also prioritizing prediction efficiency.
However, recent criticism by Zeng et al. [35] suggests that the self-attention mechanism in transformers inevitably leads to temporal information loss, and their empirical results indicate that these models may not even outperform simple one-layer linear models in certain experiments. In the domain of time series forecasting with scarce data, deep learning models frequently adopt less complicated architectures to enhance model performance. Tsaur [29] employed fuzzy grey regression models, while Abdulmajeed et al. [1] utilized an ensemble of several auto-regressive models to improve accuracy and robustness in predicting COVID-19 cases in Nigeria. Informed by these insights, our approach emphasizes the use of simpler and more interpretable models when working with limited wastewater and case count data aggregated across the country. Specifically, we employed linear regression models, ARIMAX models, and Gaussian process regression models with a combination of kernels to address the problem of noise in the data. Additionally, we conducted a comparative analysis with deep learning models, including multi-layer perceptron (MLP) and LSTM models, to evaluate the effectiveness of our chosen methodology in the context of limited data.

3 PRELIMINARIES

Time-series forecasting. The primary objective of time-series forecasting [19, 25] is to make accurate predictions of future values in a sequence, utilizing historical observations as a basis. Given a set of observed data points x_1, ..., x_t, where x_i ∈ X, the aim is to forecast the corresponding labels y_1, ..., y_t for each timestep, ranging from 1 to t, with y_i ∈ Y.
Let h represent the look-back window size; when predicting the label y_i, the prediction model can take as input H = {x_{i−h+1}, ..., x_i} or H = {x_{i−h+1}, ..., x_i, y_{i−h+1}, ..., y_{i−1}}. This constraint ensures that predictions rely solely on information available within the specified historical context.

Wastewater-based Epidemiology. Wastewater-based epidemiology (WBE) is an approach to public health surveillance that leverages the detection of biological or chemical markers present in sewage to reflect the health status of a region [21]. In the case of COVID-19, the wastewater data measures genetic fragments of the SARS-CoV-2 virus excreted in stool, specifically targeting the N1 and N2 regions of the nucleocapsid gene, to determine COVID-19 concentrations.

4 METHOD

In this section, we detail our data preprocessing steps, modeling techniques, and evaluation methods. Our training method focuses on aligning misaligned time-series data, computing input embeddings, and employing models that strike a balance between simplicity, interpretability, and predictive accuracy. We also introduce a wave-based segmentation approach for evaluation, arguing its effectiveness as a more accurate metric and discussing its calibration using expert-identified waves.

4.1 Data Processing

To ensure the quality and consistency of the data used for training and evaluation, we first address the challenge of misaligned time series data and then segment the data into waves based on the observed distribution shifts. These preprocessing steps aim to improve the model's reliability and adaptability to changes in the underlying data distribution over time.

4.1.1 Handling Misaligned Time-Series Data. Dealing with inconsistent time intervals or irregular timestamps in time-series forecasting is a common challenge. In our study, the primary issue arises from the weekly updates of wastewater data (x_i) and the daily updates of case count data (y_i).
There are two main strategies to address this: removing data points without corresponding labels or utilizing all available data, for instance, through interpolation [31]. Our approach is to associate each element x_t in the wastewater dataset X with all elements that fall within the interval between two successive wastewater data updates. Specifically, for each x_t in the dataset X, we define:

x_t = {x_t} ∪ {y_i | T_{x_{t−1}} < T_{y_i} < T_{x_t}}    (1)

where T_x denotes the timestamp of the event x, and y_t is treated as the ground truth label. The augmented x_t now includes the wastewater data point at time t and all case count data points whose timestamps T_{y_i} are strictly greater than the timestamp T_{x_{t−1}} of the preceding wastewater data point and strictly less than the timestamp T_{x_t} of the current wastewater data point. The reason behind this decision is to maximize data utilization. However, it may not always reflect real-world scenarios, where all data might not be up-to-date, or future trends a few days from now need to be predicted. We empirically evaluate the impact of such delays when doing forecasting in Section 5.5.

4.1.2 Embedding of input data. As shown in Figure 1, there exists a lead-lag relationship [4, 13] between the wastewater data and the case count data. Specifically, signals in the wastewater data often precede signals in the case count data by a span of several days or weeks. To accommodate this time-shifted relationship, we implement a sliding window approach for both the wastewater and case count data inputs. Formally, for a selected time point i, and a window size h_w for wastewater data and h_c for case count data, we generate input sequences X^wastewater_i and X^casecount_i respectively, as:

X^wastewater_i = [w_{i−h_w}, ..., w_{i−l_w}]
X^casecount_i = [c_{i−h_c}, ..., c_{i−l_c}],    (2)

where w_j denotes the wastewater data and c_j denotes the case count data at time j. l_c and l_w are used to simulate the information available at the time of prediction in the real world.
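A minimal sketch of the window construction in Eq. (2); the indexing assumes Python lists or arrays and treats the upper bound i − l as inclusive, matching the notation:

```python
def make_input_windows(w, c, i, h_w, h_c, l_w=1, l_c=1):
    """Sliding-window inputs: wastewater values w[i-h_w .. i-l_w] and
    case counts c[i-h_c .. i-l_c], both ends inclusive; l_w = l_c = 1
    corresponds to using all data up to time i - 1."""
    x_wastewater = w[i - h_w : i - l_w + 1]
    x_casecount = c[i - h_c : i - l_c + 1]
    return x_wastewater, x_casecount
```

Increasing l_w or l_c simulates a reporting delay: the most recent observations are withheld from the model, which is the setting evaluated empirically later in the paper.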
l_w = l_c = 1 means that the prediction model is given all the data up to date. To maintain scale consistency across all data points, we normalize the case count data using a min-max scaler, deriving the scaling parameters from historical data. This process ensures the data maintains its inherent trend and distribution characteristics while being compatible with the model input, especially the deep learning models.

4.2 Modeling Techniques for Time-series Data

In the context of limited data, the ideal model to capture temporal correlations should balance simplicity, interpretability, and a lower parameter count. More complex models, while potentially improving performance, might overfit the data and compromise interpretability and deployability. Therefore, in this study, our emphasis is on methodologies that ensure adequate predictive accuracy while maintaining computational feasibility and transparency in interpreting data patterns.

(1) Linear Regression Model [17]: Used as a benchmark, this simple model provides a baseline for performance comparison.
(2) ARIMAX Model [2]: Serving as a robust statistical model, ARIMAX extends the traditional ARIMA model by incorporating exogenous inputs, which helps in modeling complex temporal structures in the presence of influential external factors; this suits our dataset with a lead-lag relationship.
(3) Gaussian Process Regression (GPR) Model: This model leverages a custom kernel for handling non-linear relationships and noisy data.
Our kernel construction involves a multiplicative interaction of Constant and RBF kernels, along with an additive incorporation of a White kernel for noise management and a Matern kernel for smoothness.
(4) Multi-layer perceptron (MLP): A widely employed neural network for regression problems, our implementation features two hidden layers with 128 units each and ReLU as the activation function.
(5) Long Short-Term Memory (LSTM) model [11]: As a type of recurrent neural network, LSTMs are capable of capturing temporal dependencies in data, making them well-suited for time series forecasting tasks. LSTMs can learn to filter out noise by selectively retaining valuable information through gating mechanisms. To mitigate overfitting, we incorporate a dropout [26] rate of 0.5 after each layer in the model and add an L2 regularization.

4.3 Wave-based Segmentation

One important observation for pandemic-related data is the dynamic nature of the underlying distribution over time. This variability can be attributed to several factors, including the emergence of different viral variants [5], changes in vaccination status among the population [7], and the implementation of varied government policies [33]. The presence of these distribution shifts significantly complicates the prediction process. To address this issue, we propose splitting the data into waves, where each wave is assumed to have a relatively stable distribution. We employ Binary Change Point Detection [9] for identifying time-series data change points, chosen for its multiple change point detection, lack of a predetermined change-point requirement, and computationally efficient O(C n log n) complexity.

4.3.1 Hyperparameter Calibration. Once the waves are identified, we calibrate the model's hyperparameters, including the cost function, penalty term, and minimal distance between two change points, to fit the waves recognized by domain experts.
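Binary change-point detection of the kind cited above [9] is typically implemented as greedy recursive splitting; a minimal L2-cost sketch follows (the study presumably uses an existing implementation, and our `min_size` argument stands in for the minimal-distance hyperparameter):

```python
import numpy as np

def l2_cost(x):
    """Sum of squared deviations from the segment mean."""
    return float(np.sum((x - x.mean()) ** 2)) if len(x) else 0.0

def binary_segmentation(signal, n_changepoints, min_size=5):
    """Greedy binary splitting with an L2 cost: repeatedly split the
    segment whose best split yields the largest cost reduction."""
    signal = np.asarray(signal, float)
    segments = [(0, len(signal))]
    changepoints = []
    for _ in range(n_changepoints):
        best = None  # (gain, split index, parent segment)
        for a, b in segments:
            base = l2_cost(signal[a:b])
            for t in range(a + min_size, b - min_size + 1):
                gain = base - l2_cost(signal[a:t]) - l2_cost(signal[t:b])
                if best is None or gain > best[0]:
                    best = (gain, t, (a, b))
        if best is None:
            break
        _, t, parent = best
        segments.remove(parent)
        segments.extend([(parent[0], t), (t, parent[1])])
        changepoints.append(t)
    return sorted(changepoints)
```

On a piecewise-constant signal, the detected split points coincide with the level shifts, which is the behavior the wave segmentation relies on.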
We formulate a scoring function and select the optimal hyperparameters on the validation data. Given a set of detected change points CP = {cp_1, cp_2, ..., cp_n} and a set of expert-identified waves W = {w_1, w_2, ..., w_m}, we define a score function as

S(CP, W, α, β) = Σ_{i=1}^{m} exp(−α · d(w_i, CP)) − β · |n − m|,    (3)

where α is the decay factor for the impact of the distance between the detected change points and the actual waves, β is the penalty coefficient that penalizes the absolute difference between the number of detected waves and the number of actual waves, and d(w_i, CP) denotes the closest distance between wave w_i and the set of detected change points in CP. The objective is to find hyperparameters that minimize this score:

CP★ = argmin_{α,β} S(CP, W, α, β).    (4)

Minimizing this metric allows us to select the hyperparameters that optimally align the detected change points with the expert-identified waves while balancing proximity and the penalty for the difference in the number of change points and waves.

4.3.2 Evaluation using Wave-based Segmentation. Our approach leverages wave-based segmentation for evaluation. Once we separate our dataset D into training, D_train, and testing sets, D_test, we restrict the test data to have just one segment. Mathematically, if S_test represents all segments in D_test, we ensure that |S_test| = 1. This methodology mirrors real-world conditions more accurately, as predicting data of new waves often requires substantial additional information.
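The score of Eq. (3) is straightforward to implement; in the sketch below each expert wave is represented by a single reference time index, which is our assumption about how d(w_i, CP) is evaluated:

```python
import math

def segmentation_score(change_points, expert_waves, alpha, beta):
    """Eq. (3): exponentially decayed distance from each expert wave
    to its nearest detected change point, minus a penalty on the
    mismatch between the number of change points and waves."""
    n, m = len(change_points), len(expert_waves)
    aligned = sum(
        math.exp(-alpha * min(abs(w - cp) for cp in change_points))
        for w in expert_waves
    )
    return aligned - beta * abs(n - m)
```

When every expert wave coincides with a detected change point and the counts match, each exponential term contributes exactly 1 and the penalty vanishes.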
We avoid using wave-based segmentation in training due to potential data leakage issues, as it commonly uses global data to determine the segmentation, which could inadvertently affect the results.

5 EXPERIMENTS
In this section, we outline the experimental setup, including data visualization and segmentation results, and present the empirical results obtained by evaluating the five models on the task of predicting case counts.

5.1 Experimental Setup
Our experiments exclusively use publicly available data, namely wastewater data¹ and case count data², which are originally aggregated at the county or state level and therefore pose inherent challenges due to their noisy nature. The case count data serve as ground truth for our prediction task. Owing to variability in the collection of county/state-level data, we aggregate all data at the national level and utilize the nationwide average for our analysis. Composed of wastewater data and case count data, our dataset spans from January 15, 2020, to February 15, 2023. Wastewater data is reported on a weekly basis (162 data points), while case count data are collected daily (1128 data points). For all the experiments, we report the mean and standard deviation of 6 runs.

To better understand the correlation between wastewater data and the case counts, we visualize the trends in the data in Figure 1. We aggregate the data at the national level due to the high variability and statistical noise inherent in the state-wise data, as evidenced in Figure 1(b).
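The single-segment test restriction of Section 4.3.2 (|S_test| = 1) amounts to cutting the test set at the last detected change point. A minimal sketch, assuming change points are given as sorted integer indices into the series (the function name and toy indices are illustrative):

```python
import numpy as np

def single_segment_test_split(n_points, change_points):
    """Restrict the test set to the final wave so that |S_test| = 1.
    `change_points` are indices where a new wave begins."""
    last_cp = max(change_points)
    train_idx = np.arange(0, last_cp)       # all earlier waves
    test_idx = np.arange(last_cp, n_points) # exactly one segment
    return train_idx, test_idx

# e.g., 162 weekly points with hypothetical waves starting at 30, 75, 120
train_idx, test_idx = single_segment_test_split(162, [30, 75, 120])
```

Because every change point lies at or before the start of the test slice, no wave boundary falls strictly inside the test set, so a single wave's error cannot be masked by averaging over others.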
As shown in Figure 1 with the shifted wastewater curve, a strong association exists between the trend of virus concentration levels in wastewater and that of the number of cases, with wastewater data trends slightly preceding those of case counts. However, it is important to underscore that despite the exhibited association between the two trends, the relationship between their absolute numbers is not straightforward.

¹ https://github.com/biobotanalytics/covid19-wastewater-data
² https://usafacts.org/visualizations/coronavirus-covid-19-spread-map/

Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19. epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA.

Figure 1: Temporal Correlation between Wastewater Viral Concentrations and Case Counts per 100k population. The x-axis shows the dates ranging from 2020-01-15 to 2023-02-15, and the y-axis denotes the values of the viral wastewater concentrations and the number of cases per 100k population. Subfigure (a) describes the aggregated trend of the nation, and (b) describes two randomly picked states, Georgia and Mississippi.

5.2 Visualization of Segmentation Result
After calibrating the hyperparameters on the expert-identified waves from March 2020 to February 2022 [28], we use the Binary Change Point algorithm [9] to detect the change points in the wastewater virus concentration level data. In our case, the expert data segmentation consists of five points, forming six distinct waves. As a result, we opted to include all of these points in the calculation of the score function during the calibration process.
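The binary-segmentation idea behind the detector can be illustrated with a toy, numpy-only version: recursively split the series at the point that most reduces a mean-shift squared-error cost, stopping when the gain falls below a penalty. This is a sketch only, not the implementation of [9]; the cost model, penalty value, and minimum segment size here are illustrative assumptions.

```python
import numpy as np

def sse(x):
    # squared-error cost of modeling a segment by its mean
    return float(np.sum((x - x.mean()) ** 2)) if len(x) else 0.0

def binary_segmentation(x, min_size=5, pen=2.0):
    """Toy binary change point detection: greedily split the segment
    whose best split reduces the cost by more than `pen`."""
    x = np.asarray(x, dtype=float)

    def best_split(lo, hi):
        base = sse(x[lo:hi])
        best_gain, best_t = 0.0, None
        for t in range(lo + min_size, hi - min_size + 1):
            gain = base - sse(x[lo:t]) - sse(x[t:hi])
            if gain > best_gain:
                best_gain, best_t = gain, t
        return best_gain, best_t

    cps, stack = [], [(0, len(x))]
    while stack:
        lo, hi = stack.pop()
        gain, t = best_split(lo, hi)
        if t is not None and gain > pen:
            cps.append(t)
            stack += [(lo, t), (t, hi)]
    return sorted(cps)

# three noiseless regimes with mean shifts at indices 30 and 60
series = np.concatenate([np.zeros(30), np.ones(30) * 5, np.ones(30) * 1])
cps = binary_segmentation(series)
```

On real, noisy concentration data the penalty trades off sensitivity against spurious splits, which is exactly the quantity calibrated against the expert waves in Section 4.3.1.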
Figure 2 demonstrates that the detected change points closely align with the expert-identified waves and that our method can accurately detect change points even in areas not covered by the expert data segmentation.

Figure 2: Segmentation results using Binary Change Point Detection. The green dotted lines represent expert-identified change points, while the red dotted lines indicate our detected change points. The x-axis denotes the days passed since 2020-01-15, and the y-axis shows the viral wastewater concentration level. Our model's detected change points exhibit close correspondence with expert-identified points.

5.3 Evaluation across Varied End Dates
To assess the accuracy of our models, we evaluate their performance throughout the course of the pandemic. Figure 3 presents the Normalized Root Mean Square Error (NRMSE) of each model over the different end dates, allowing for a comparative analysis of model consistency and adaptability across time. We compare our results with a random forest model developed by Li et al. [13]. Their model was trained on diverse data, including hospitalization and ICU admission records, CCVI indexes, and vaccination records, among others. Notably, their work does not clearly delineate the date range for the test data, a factor that could significantly impact the model's accuracy.

Figure 3 shows that the models perform relatively poorly in the early stages of the pandemic but improve significantly in the later stages, even during a sudden peak in early 2022. In the later stages of the pandemic (after July 2021), as shown in Figure 3, all five models reach performance on par with the baseline model, indicating an NRMSE below 1.0. This suggests that, on average, the model's prediction error is less than the standard deviation of the observed data, which is over 200 cases during the peak.
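The paper does not spell out the NRMSE normalization, but the interpretation given in the text (NRMSE below 1.0 means the error is smaller than the standard deviation of the observed data) corresponds to normalizing the RMSE by the standard deviation of the ground truth. A minimal sketch under that assumption:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalized by the standard deviation of the observations.
    NRMSE < 1 means the error is smaller than the data's own spread."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.std(y_true)

# a constant predictor at the mean scores exactly 1.0 under this normalization
y = np.array([1.0, 2.0, 3.0, 4.0])
baseline = np.full_like(y, y.mean())
```

This also explains why the mean predictor serves as the natural NRMSE = 1.0 reference line in the end-date comparison.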
The performance in the early stages is worse, possibly due to the lack of sufficient data to learn the inherent temporal correlation.

5.4 Impact of Segmentation on Evaluation
In addition to evaluating the performance on different dates, we also conduct an experiment to understand how wave segmentation impacts the evaluation of our models. Figure 4 shows model performance with and without segmentation. Performance differences are more noticeable during peak periods, likely due to rapid trend shifts that make the prediction task difficult.

We remark that this experiment highlights the importance of segmentation in this task of predicting case counts, particularly during volatile periods. The omission of this segmentation method, as is the case in [13], could lead to inaccuracies in the Normalized Root Mean Square Error (NRMSE), as multiple waves in the test data may mask inaccuracies within one particular wave. Therefore, we present the results with the segmentation evaluation method for all subsequent experiments. It is also worth noting that these results are based on the assumption of perfect up-to-date knowledge. Results based on more relaxed assumptions are discussed in the following subsection.

epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fang.

Figure 3: Performance comparison of models across end dates. The x-axis denotes the end date of the test period, while the y-axis represents the normalized root mean square error (NRMSE) of the prediction for the number of cases. The grey curve denotes the actual number of cases. The dotted line denotes the reported performance of the model in [13].

Figure 4: Prediction accuracy comparison for each model with and without segmentation. The x-axis is the end date, and the y-axis is the normalized root mean square error (NRMSE) of the prediction for the number of cases. The dotted lines denote evaluation results with segmentation performed, and the solid lines denote evaluation without segmentation.

5.5 Prediction Accuracy across Varied Forecasting Horizons
We further examine our models' prediction accuracy under varying forecasting horizons (the number of days in advance at which the prediction is made) at three distinct end dates. These dates are selected based on the previous empirical results to be representative of the different waves. This setting mirrors the real-life context where decisions often need to be made several days in advance.

The outcome, displayed in Figures 5(a), 5(b), and 5(c), shows an expected trend: an increased forecasting horizon generally corresponds to decreased prediction accuracy. This trend can be attributed to the increased challenges introduced by longer response times. However, there are instances where model accuracy improves with an increased forecasting horizon, likely due to the inherent variability in the data. Notably, on all three dates, the GPR and MLP models perform the best, likely due to their smaller parameter counts and simpler structures. Based on the results, we recommend 6 to 12 days as a good trade-off between a longer forecasting horizon and better prediction accuracy, as the prediction error generally does not increase much over this range.

6 CONCLUSIONS
In this study, we explored the feasibility of utilizing publicly available wastewater data to forecast the number of COVID-19 cases. We employed five representative time-series prediction methods to capture the temporal associations within the viral wastewater concentration levels and case count data.
Our empirical results show that the resulting models performed comparably with those trained on a more diverse range of data sources, underscoring the viability of this approach even with restricted data access.

Furthermore, our research underscores the importance of data segmentation during evaluation to better comprehend the inherent relationship between wastewater data and COVID-19 case counts. This segmentation approach addresses the complexities posed by test data spanning multiple waves, which can influence model evaluation metrics. Grounded in our empirical findings, we also propose practical guidelines regarding the forecasting horizon for case count prediction.

We hope that the findings of this study contribute to the growing body of research on wastewater-based epidemiology and provide valuable insights into the challenges and potential solutions for accurate epidemic forecasting using wastewater data, which can be applied in real-world scenarios to improve public health surveillance and inform decision-making processes. We acknowledge the complexities introduced by evolving testing and reporting practices during the COVID-19 pandemic, which make it increasingly hard to acquire ground truth data; therefore, alternative metrics like mortality data may gain prominence in different stages of epidemiological forecasting. We also acknowledge the existence of other publicly accessible data sources of varying types that may be utilized, including reproduction numbers [12], hospitalization numbers, and mortality rates [10, 36]. These additional data sources present ample opportunities for future research directions, broadening the scope of our current understanding and forecasting capabilities of public health scenarios.

Figure 5: Prediction accuracy corresponding to different lead times at three different dates. Panels: (a) performance comparison w.r.t. #days to react on 2021-12-15; (b) on 2022-07-03; (c) on 2022-10-11. The x-axis indicates the forecasting horizon, and the y-axis denotes the normalized root mean square error (NRMSE) of the prediction of the number of cases. The three dates are chosen to illustrate the models' performance at distinct waves during the pandemic.

ACKNOWLEDGMENTS
Zhicheng Zhang, Fei Fang, Angel Desai, and Sonja Neumeister were supported in part by grant SES2200228 from the National Science Foundation. Maimuna Shahnaz Majumder was supported in part by grant R35GM146974 from the National Institute of General Medical Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Zhicheng Zhang was supported in part by an SCS Dean's Fellowship.
r7bhZSitrVI
Review of Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19
4: Good paper, accept
Summary: Employing several different models, this paper demonstrates how aggregated wastewater data from across the US can be used to forecast COVID-19 cases. This paper also evaluates the optimal horizon for forecasting COVID-19 case data from wastewater signals.

Clarity: This paper was well written and the paper's objectives are clear. There are also clear descriptions of why given models were chosen for this evaluation. To improve upon the clarity, I would suggest the following:
-- Further explain why case counts were used instead of hospitalization counts as the COVID-19 outcome metric. The given explanation in the paper is that "...case count data becomes an effective indicator of the strain on the healthcare system and the potential long-term effects of SARS-CoV-2 infection". However, this same logic applies to COVID-19 hospitalization data, which did not suffer from the same notorious underreporting as case data did. This is not to say case data shouldn't be used, just that it is not clear why this was the outcome metric chosen.
-- It is unclear at what time stamp "ground truth" data was being pulled. Did the authors use case data as of the date of model evaluation (i.e., potentially revised case data)? Or only case data available the week of wastewater data collection?
-- Based on Figures 5a-5c, the error (measured in NRMSE) does not look much worse at 9 days vs. 6 days. It would help if the authors could clarify more objectively how the cutoff for 6-8 days was determined as the optimal horizon period.

Minor comments on clarity:
-- A short sentence or two about what is being measured in the wastewater would be beneficial. What gene is being targeted / measured to determine COVID-19 concentrations?
-- Because there is so much variability in wastewater data, it might be helpful to mention how the prediction intervals of models are impacted by the changes in the wastewater data.
-- In section 4.1.1, definitions are needed for variables in equation 1. It is not immediately apparent what each different "x" represents. Additionally, in section 4.1.1, the authors mention an evaluation of data delays in section 5.5; however, section 5.5 is about horizons, not data delays.
-- A note that the hyperlinks for footnotes 1 and 2 are broken.

Originality: There are similar articles that have compared modeling approaches on their ability to predict COVID-19 cases from wastewater data; however, this paper adds to the growing body of literature by using a segmentation approach on publicly available data, as well as determining the ideal forecast based on prediction accuracy and maximizing a longer forecasting horizon.

Significance: The significance of this paper is that it demonstrates that numerous modeling approaches provide similar results when using wastewater data to predict COVID-19 cases at a national level. This paper could be improved by adding discussion of the biases and limitations of using data aggregated at a national level, and by demonstrating how well these models perform at a smaller geographic scale. Discussion of the confidence levels of these models would also be beneficial, as would using multiple metrics to evaluate model performance.

Pros:
-- Well written paper with many clear discussions about decisions made in the experimental process.
-- Use of publicly available data makes the methods replicable.
-- The project opens the door for additional analyses that can be done using wastewater data, as well as additional variables that can be added to the analysis.

Cons:
-- As noted in the sections above, the paper could expand on the limitations of using wastewater and case data at the national level, such as heterogeneity in COVID-19 across the country, non-standardized collection approaches across counties / states, variation in sampling sites, etc.
-- The determination of why 6-8 days is the optimal horizon is not clear from the figures presented.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
J8Gc5acxME
KDD.org/2023/Workshop/epiDAMIK
2023
Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19
["Zhicheng Zhang", "Sonja Neumeister", "Angel Desai", "Maimuna S. Majumder", "Fei Fang"]
The COVID-19 pandemic has emphasized the necessity for effective tools to monitor and predict epidemiological trends. Traditional approaches to disease surveillance possess certain limitations, leading to the emergence of wastewater-based epidemiology (WBE) as a complementary approach. WBE has demonstrated a strong correlation with traditional epidemiological indicators (e.g., number of clinical cases and hospitalization), which makes it a valuable asset in informing public health decision-making processes. Despite the promising prospects of WBE, it faces certain challenges, including restricted data accessibility, geographical bias in data coverage, high data noise levels, and significant data distribution shifts. In this study, we examine the feasibility of utilizing exclusively two publicly available data, specifically aggregated wastewater data and reported case counts, for epidemiological forecasting in the COVID-19 pandemic. We incorporate a variety of statistical and machine learning models in an attempt to address the inherent volatility and bias of the data. We further introduce the usage of the segmentation method during the evaluation phase as a better evaluation metric. Our empirical results show that, even with limited data, performing epidemiological forecasting is possible, and its performance is comparable with methods that use more diverse data sources, suggesting its potential for broader health applications. Additionally, we utilize the insights from results on the length of the forecasting horizon to provide practical guidelines regarding real-world prediction.
["COVID-19", "Disease Surveillance", "Wastewater-Based Epidemiology", "Time-Series Forecasting"]
ABSTRACT
The COVID-19 pandemic has emphasized the necessity for effective tools to monitor and predict epidemiological trends. Traditional approaches to disease surveillance possess certain limitations, leading to the emergence of wastewater-based epidemiology (WBE) as a complementary approach. WBE has demonstrated a strong correlation with traditional epidemiological indicators (e.g., number of clinical cases and hospitalizations), which makes it a valuable asset in informing public health decision-making processes. Despite the promising prospects of WBE, it faces two main challenges: restricted data accessibility, and high intrinsic noise and distribution shift in the data. In this study, we examine the feasibility of utilizing exclusively two publicly available data sources, specifically aggregated wastewater data and reported case counts, for epidemiological forecasting in the COVID-19 pandemic. We incorporate a variety of statistical and machine learning models in an attempt to address the inherent volatility and bias of the data. We further introduce the usage of the segmentation method during the evaluation phase as a better evaluation metric. Our empirical results show that, even with limited data, performing epidemiological forecasting is possible, and its performance is comparable with methods that use more diverse data sources, suggesting its potential for broader health applications. Additionally, we utilize the insights from results on the length of the forecasting horizon to provide practical guidelines regarding real-world prediction.

KEYWORDS
COVID-19, Disease Surveillance, Wastewater-Based Epidemiology, Time-Series Forecasting

ACM Reference Format:
Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fang. 2023.
Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 8 pages.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. ©2023 Copyright held by the owner/author(s).

1 INTRODUCTION
The COVID-19 pandemic has emphasized the importance of reliable tools for monitoring and forecasting epidemiological trends. Traditional disease surveillance approaches, based on clinical data, have limitations in both timeliness and coverage. Wastewater-based epidemiology (WBE) has thus emerged as a complementary approach to track the spread of infectious diseases in communities [8]. WBE has demonstrated significant potential in the monitoring and forecasting of epidemics, particularly during the COVID-19 pandemic. Several studies have utilized wastewater data to forecast clinical cases, hospitalizations, and ICU admissions, as well as to evaluate the effectiveness of governmental policies in containing COVID-19 transmission [10, 12, 13, 27]. Studies have found a strong link between data from wastewater surveillance and disease indicators. This link can help make better health decisions, use resources wisely, and put interventions in place quickly.

However, despite the promising results of WBE, there are two main challenges that need to be addressed for broader practical applications, which have not been thoroughly explored in the existing literature.
First, current approaches in using WBE mainly rely on small-scale, privately collected data, such as those from university campuses [36], or inaccessible private-sector wastewater data [10, 12]. Often, methods supplement wastewater data with additional data sources, including the Community Vulnerability Index (CCVI) and vaccination records [13]. In a broader context, the sharing of wastewater data is restricted, and its coverage is geographically skewed towards economically developed areas that have a greater number of wastewater monitoring facilities [18, 23]. Second, real-world epidemiological data is inherently noisy due to various factors such as sampling errors and challenges in attributing causes [24]. This issue is further exacerbated during global pandemics like COVID-19, where the temporal correlations within the data can drastically shift over the course of the pandemic, undermining the accuracy of predictions. Such drastic shifts can occur when a new variant emerges and rapidly becomes dominant or when vaccination rates significantly increase, both of which cause distinct changes in epidemiological trends. These shifts underscore the need for robust forecasting models capable of adapting to evolving pandemic dynamics.

In this study, we focus on two publicly available datasets: aggregated wastewater data and reported case counts, both at the country level. This selection of datasets is driven by the ready accessibility and reliability of these data sources: wastewater data is regularly published not only by the CDC's National Wastewater Surveillance System (NWSS) but also by other agencies adhering to CDC protocols, while case count numbers are widely reported. This widespread adoption of consistent data-gathering protocols ensures the broad availability and comparability of these datasets.
It also aims to alleviate volatility and mitigate biases inherent in smaller or less developed regions. The COVID-19 pandemic's landscape has been constantly changing, influencing how we assess its spread and impact. Initially, the case count data, encompassing both severe and mild cases, offered valuable insight into the pandemic's trajectory. This metric was particularly comprehensive during periods of widespread testing and reporting. However, as the pandemic has progressed, testing methods and reporting practices have evolved, with an increase in home testing and a decrease in reports to governmental agencies. While these changes present challenges, case counts still serve as a strong signal of disease prevalence. Our core objective here is to investigate the feasibility of using only these two publicly available data sources, case counts and wastewater data, for epidemiological forecasting.

To evaluate this feasibility, we model the problem as a time-series forecasting problem characterized by significant distribution shifts in the data over time. We employ data preprocessing techniques to manage misaligned time-series data and introduce a segmentation algorithm during the evaluation phase to account for temporal shifts. This segmentation method enhances evaluation accuracy by ensuring that the test data spans only one wave, so that the test error is no longer masked by the results in other waves, and we empirically show it to be a better evaluation criterion. To balance interpretability, simplicity, and prediction accuracy, we implement a variety of statistical and machine learning models, including linear regression, ARIMAX, Gaussian Process Regression, multi-layer perceptron (MLP), and Long Short-Term Memory (LSTM) networks. The diversity of these modeling techniques enables us to compare the efficiency of simpler models with their more complex, deep-learning counterparts.
Finally, our analysis shows that by only using aggregated wastewater data and reported case counts, we can achieve performance comparable with a random-forest model trained on diverse data sources, including CCVI indexes and vaccination records, in [13]. We further empirically demonstrate that the segmentation method provides a more accurate evaluation, particularly during volatile periods such as the case count peak in early 2022. Based on the empirical results on the effect of forecasting horizons of different lengths, we provide a practical recommendation for selecting the forecasting horizon in order to optimize the balance between reaction time and prediction accuracy.

2 RELATED WORK
Wastewater-based epidemiology. Wastewater-based epidemiology (WBE) has become an important tool for monitoring and forecasting epidemiological trends over the past two decades [8]. During the recent outbreak of COVID-19 [6], wastewater data was used to forecast clinical cases, hospitalizations, and ICU admissions, as well as to evaluate the effectiveness of governmental policies [10, 12, 13, 27]. Galani et al. [10], Kaplan et al. [12], and Stephens et al. [27] measured the wastewater at a number of monitoring sites and empirically demonstrated a strong correlation between hospitalizations and wastewater surveillance data using regression models. Kaplan et al. [12] used wastewater data to estimate reproduction numbers. Li et al. [13] used data from 100 USA counties to predict hospital and ICU admission numbers using random forest models.

However, despite its effectiveness in predicting epidemiological trends, wastewater data were not widely shared with the public or accessible to researchers, making it infeasible to perform additional analyses [18].
Current works often rely on small-scale, privatelycollected dataset [ 36], or supplement the dataset with other diversesources of data, like vaccination records and CCVI indexes [ 13].In addition, the coverage of wastewater data is severely biasedtoward economically more developed geographic regions with morewastewater monitoring facilities [ 18,23]. In an attempt to addressthese challenges, our approach differs from previous work in that weaim to assess the promise of using exclusively two publicly availabledata sources: aggregated wastewater data and the reported casecount data that are easily accessible to the public for epidemiologicalforecasting. Specifically, we focus on data within the United Stateswhile averaging it across the country to minimize bias in wastewaterdata from smaller or less-developed counties and states.Time-series forecasting. Time series forecasting has been a long-standing problem in the fields of statistics and machine learning, at-tracting significant research attention. Classical methods [ 3,16] pro-vide a comprehensive understanding of time series analysis and fore-casting and offer both theoretical insights and statistical guarantees.The advent of deep learning-based methods, particularly recurrentnetworks, has substantially improved the ability to capture temporalcorrelations in training data, as demonstrated by works including re-current neural networks (RNNs) [ 22] and long short-term memory(LSTM) networks [ 11]. In recent years, long-term series forecasting(LSTF) research has focused on transformer-based models [ 30] dueto their remarkable success in various application domains, suchas natural language processing (NLP) [ 20] and computer vision(CV) [ 15]. Transformer-based LSTF models [ 14,32,34,37,38] havedemonstrated impressive forecasting performance while also priori-tizing prediction efficiency. 
However, recent criticism by Zeng et al. [35] suggests that the self-attention mechanism in transformers inevitably leads to temporal information loss, and their empirical results indicate that these models may not even outperform simple one-layer linear models in certain experiments.

In the domain of time series forecasting with scarce data, deep learning models frequently adopt less complicated architectures to enhance model performance. Tsaur [29] employed fuzzy grey regression models, while Abdulmajeed et al. [1] utilized an ensemble of several auto-regressive models to improve accuracy and robustness in predicting COVID-19 cases in Nigeria. Informed by these insights, our approach emphasizes the use of simpler and more interpretable models when working with limited wastewater and case count data aggregated across the country. Specifically, we employed linear regression models, ARIMAX models, and Gaussian process regression models with a combination of kernels to address the problem of noise in the data. Additionally, we conducted a comparative analysis with deep learning models, including multi-layer perceptron (MLP) and LSTM models, to evaluate the effectiveness of our chosen methodology in the context of limited data.

3 PRELIMINARIES
Time-series forecasting. The primary objective of time-series forecasting [19, 25] is to make accurate predictions of future values in a sequence, utilizing historical observations as a basis. Consider a set of observed data points x_1, ..., x_t, where x_i ∈ X; the aim is to forecast the corresponding labels y_1, ..., y_t for each timestep, ranging from 1 to t, with y_i ∈ Y.
Let h represent the look-back window size; when predicting the label y_i, the prediction model can take as input H = {x_{i−h+1}, ..., x_i} or H = {x_{i−h+1}, ..., x_i, y_{i−h+1}, ..., y_{i−1}}. This constraint ensures that predictions rely solely on information available within the specified historical context.

Wastewater-based Epidemiology. Wastewater-based epidemiology (WBE) is an approach to public health surveillance that leverages the detection of biological or chemical markers present in sewage to reflect the health status of a region [21]. In the case of COVID-19, the wastewater data measures genetic fragments of the SARS-CoV-2 virus excreted in stool, specifically targeting the N1 and N2 regions of the nucleocapsid gene, to determine COVID-19 concentrations.

4 METHOD
In this section, we detail our data preprocessing steps, modeling techniques, and evaluation methods. Our training method focuses on aligning misaligned time-series data, computing input embeddings, and employing models that strike a balance between simplicity, interpretability, and predictive accuracy. We also introduce a wave-based segmentation approach for evaluation, arguing its effectiveness as a more accurate metric and discussing its calibration using expert-identified waves.

4.1 Data Processing
To ensure the quality and consistency of the data used for training and evaluation, we first address the challenge of misaligned time-series data and then segment the data into waves based on the observed distribution shifts. These preprocessing steps aim to improve the model's reliability and adaptability to changes in the underlying data distribution over time.

4.1.1 Handling Misaligned Time-Series Data. Dealing with inconsistent time intervals or irregular timestamps in time-series forecasting is a common challenge. In our study, the primary issue arises from the weekly updates of wastewater data (x_i) and the daily updates of case count data (y_i).
There are two main strategies to address this: removing data points without corresponding labels, or utilizing all available data, for instance through interpolation [31]. Our approach is to associate each element x_t in the wastewater dataset X with all elements that fall within the interval between two successive wastewater data updates. Specifically, for each x_t in the dataset X, we define:

x_t = {x_t} ∪ {y_i | T_{x_{t-1}} < T_{y_i} < T_{x_t}}    (1)

where T_x denotes the timestamp of the event x, and y_t is treated as the ground-truth label. The augmented x_t now includes the wastewater data point at time t and all case count data points whose timestamps T_{y_i} are strictly greater than the timestamp T_{x_{t-1}} of the preceding wastewater data point and strictly less than the timestamp T_{x_t} of the current wastewater data point. The reason behind this decision is to maximize data utilization. However, it may not always reflect real-world scenarios, where all data might not be up-to-date, or future trends a few days from now need to be predicted. We empirically evaluate the impact of such delays when forecasting in Section 5.5.

4.1.2 Embedding of Input Data. As shown in Figure 1, there exists a lead-lag relationship [4, 13] between the wastewater data and the case count data. Specifically, signals in the wastewater data often precede signals in the case count data by a span of several days or weeks. To accommodate this time-shifted relationship, we implement a sliding window approach for both the wastewater and case count data inputs. Formally, for a selected time point i, a window size h_w for wastewater data, and a window size h_c for case count data, we generate input sequences X^wastewater_i and X^casecount_i respectively, as:

X^wastewater_i = [w_{i-h_w}, ..., w_{i-l_w}]
X^casecount_i = [c_{i-h_c}, ..., c_{i-l_c}]    (2)

where w_j denotes the wastewater data and c_j denotes the case count data at time j. l_c and l_w are used to simulate the information available at the time of prediction in the real world.
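As a toy illustration of the interval association in Eq. (1), the sketch below groups case-count points by wastewater update intervals. The function name, the plain integer timestamps, and the toy data are all illustrative, not part of the paper's implementation:

```python
def associate_cases(ww_times, case_points):
    """Eq. (1): attach to each wastewater update the case-count points whose
    timestamps fall strictly between the previous and current wastewater
    timestamps. `case_points` is a list of (timestamp, value) pairs."""
    groups = []
    for t, cur in enumerate(ww_times):
        prev = ww_times[t - 1] if t > 0 else float("-inf")
        groups.append([v for (ts, v) in case_points if prev < ts < cur])
    return groups

# Weekly wastewater updates on days 7 and 14; daily case counts as (day, count).
cases = [(5, 120), (6, 130), (8, 150), (10, 160), (13, 170), (14, 180)]
grouped = associate_cases([7, 14], cases)
# grouped[1] holds only the counts observed strictly between day 7 and day 14;
# the count on day 14 itself is excluded by the strict inequality.
```

Note how the strict inequalities drop any case count reported exactly at a wastewater update time, mirroring the open interval in Eq. (1).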
l_w = l_c = 1 means that the prediction model is given all the data up to date. To maintain scale consistency across all data points, we normalize the case count data using a min-max scaler, deriving the scaling parameters from historical data. This process ensures the data maintains its inherent trend and distribution characteristics while being compatible with the model input, especially for the deep learning models.

epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA — Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fang

4.2 Modeling Techniques for Time-Series Data

In the context of limited data, the ideal model to capture temporal correlations should balance simplicity, interpretability, and a lower parameter count. More complex models, while potentially improving performance, might overfit the data and compromise interpretability and deployability. Therefore, in this study, our emphasis is on methodologies that ensure adequate predictive accuracy while maintaining computational feasibility and transparency in interpreting data patterns.

(1) Linear Regression Model [17]: Used as a benchmark, this simple model provides a baseline for performance comparison.

(2) ARIMAX Model [2]: Serving as a robust statistical model, ARIMAX extends the traditional ARIMA model by incorporating exogenous inputs, which helps in modeling complex temporal structures in the presence of influential external factors, suiting our dataset with its lead-lag relationship.

(3) Gaussian Process Regression (GPR) Model: This model leverages a custom kernel for handling non-linear relationships and noisy data.
Our kernel construction, formulated below, involves a multiplicative interaction of Constant and RBF kernels, along with an additive incorporation of a White kernel for noise management and a Matern kernel for smoothness:

k(x, x') = C · k_RBF(x, x') + k_White(x, x') + k_Matern(x, x')

(4) Multi-Layer Perceptron (MLP): A widely employed neural network for regression problems; our implementation features two hidden layers with 128 units each and ReLU as the activation function.

(5) Long Short-Term Memory (LSTM) Model [11]: As a type of recurrent neural network, LSTMs are capable of capturing temporal dependencies in data, making them well-suited for time-series forecasting tasks. LSTMs can learn to filter out noise by selectively retaining valuable information through gating mechanisms. To mitigate overfitting, we incorporate a dropout [26] rate of 0.5 after each layer in the model and add L2 regularization.

4.3 Wave-Based Segmentation

One important observation for pandemic-related data is the dynamic nature of the underlying distribution over time. This variability can be attributed to several factors, including the emergence of different viral variants [5], changes in vaccination status among the population [7], and the implementation of varied government policies [33]. The presence of these distribution shifts significantly complicates the prediction process. To address this issue, we propose splitting the data into waves, where each wave is assumed to have a relatively stable distribution. We employ Binary Change Point Detection [9] to identify change points in time-series data, chosen because it detects multiple change points, requires no predetermined number of change points, and has a computationally efficient O(Cn log n) complexity.

4.3.1 Hyperparameter Calibration. Once the waves are identified, we calibrate the model's hyperparameters, including the cost function, penalty term, and minimal distance between two change points, to fit the waves recognized by domain experts.
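As a toy illustration of this wave-splitting step, the sketch below implements a plain binary segmentation over an L2 cost in NumPy. It is a simplified, quadratic-time stand-in for the Binary Change Point Detection cited above, not the paper's implementation; `pen` and `min_size` are illustrative analogues of the penalty term and the minimal distance between change points:

```python
import numpy as np

def l2_cost(seg):
    # Sum of squared deviations from the segment mean (the "l2" cost).
    return float(np.sum((seg - seg.mean()) ** 2))

def binary_segmentation(signal, pen=10.0, min_size=5):
    """Recursively split at the index that most reduces the L2 cost; keep a
    split only if the cost reduction exceeds the penalty `pen` and both
    halves are at least `min_size` samples long."""
    signal = np.asarray(signal, dtype=float)

    def split(lo, hi):
        if hi - lo < 2 * min_size:
            return []
        base = l2_cost(signal[lo:hi])
        best_gain, best_t = 0.0, None
        for t in range(lo + min_size, hi - min_size + 1):
            gain = base - l2_cost(signal[lo:t]) - l2_cost(signal[t:hi])
            if gain > best_gain:
                best_gain, best_t = gain, t
        if best_t is None or best_gain <= pen:
            return []
        return split(lo, best_t) + [best_t] + split(best_t, hi)

    return split(0, len(signal))
```

On a piecewise-constant toy signal such as thirty zeros, thirty fives, and thirty zeros, this recovers the two level shifts; the exhaustive scan over candidate splits is what the O(Cn log n) implementation cited in the text avoids.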
We formulate a scoring function and select the optimal hyperparameters on the validation data. Given a set of detected change points CP = {cp_1, cp_2, ..., cp_n} and a set of expert-identified waves W = {w_1, w_2, ..., w_m}, we define a score function as

S(CP, W, α, β) = Σ_{i=1}^{m} exp(-α d(w_i, CP)) - β |n - m|,    (3)

where α is the decay factor for the impact of the distance between the detected change points and the actual waves, β is the penalty coefficient that penalizes the absolute difference between the number of detected waves and the number of actual waves, and d(w_i, CP) denotes the closest distance between wave w_i and the set of detected change points in CP. The objective is to find the hyperparameters that maximize this score:

CP★ = arg max S(CP, W, α, β),    (4)

where the maximization is over the change point sets CP produced by candidate hyperparameter settings. Maximizing this metric allows us to select the hyperparameters that optimally align the detected change points with the expert-identified waves while balancing proximity and the penalty for the difference in the number of change points and waves.

4.3.2 Evaluation Using Wave-Based Segmentation. Our approach leverages wave-based segmentation for evaluation. Once we separate our dataset D into a training set, D_train, and a testing set, D_test, we restrict the test data to contain just one segment. Mathematically, if S_test represents all segments in D_test, we ensure that |S_test| = 1. This methodology mirrors real-world conditions more accurately, as predicting data of new waves often requires substantial additional information.
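A minimal sketch of the calibration score in Eq. (3); here d(w_i, CP) is taken to be the absolute index distance to the nearest detected change point, and the `alpha` and `beta` defaults are illustrative rather than the paper's calibrated values:

```python
import math

def score(change_points, waves, alpha=0.1, beta=1.0):
    """Eq. (3): reward expert-identified waves that lie close to a detected
    change point, and penalize a mismatch between the number of detected
    change points and waves. Assumes `change_points` is non-empty."""
    align = sum(
        math.exp(-alpha * min(abs(w - cp) for cp in change_points))
        for w in waves
    )
    return align - beta * abs(len(change_points) - len(waves))
```

With perfectly aligned detections the exponential terms each contribute 1, so the score equals the number of waves; spurious extra change points lower it through the β term.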
We avoid using wave-based segmentation in training due to potential data leakage issues, as it commonly uses global data to determine the segmentation, which could inadvertently affect the results.

5 EXPERIMENTS

In this section, we outline the experimental setup, including data visualization and segmentation results, and present the empirical results obtained by evaluating the five models on the task of predicting case counts.

5.1 Experimental Setup

Our experiments exclusively use publicly available data, namely wastewater data^1 and case count data^2 (covering case counts and deaths), which are originally aggregated at the county or state level and therefore pose inherent challenges due to their noisy nature. The case count data serve as ground truth for our prediction task. Owing to variability in the collection of county/state-level data, we aggregate all data at the national level and utilize the nationwide average for our analysis. Composed of wastewater data and case count data, our dataset spans from January 15, 2020, to February 15, 2023. Wastewater data is reported on a weekly basis (162 data points), while case count data are collected daily (1,128 data points). For all experiments, we report the mean and standard deviation of 6 runs.

To better understand the correlation between the wastewater data and the case counts, we visualize the trends in the data in Figure 1. We aggregate the data at the national level due to the high variability and statistical noise inherent in the state-wise data, as evidenced in Figure 1(b).
As shown in Figure 1 with the shifted wastewater curve, a strong association exists between the trend of virus concentration levels in wastewater and that of the number of cases, with wastewater data trends slightly preceding those of case counts. However, it is important to underscore that despite the exhibited association between the two trends, the relationship between their absolute numbers is not straightforward.

1 https://github.com/biobotanalytics/covid19-wastewater-data
2 https://usafacts.org/visualizations/coronavirus-covid-19-spread-map/

Figure 1: Temporal Correlation between Wastewater Viral Concentrations and Case Counts per 100k Population. The x-axis shows the dates ranging from 2020-01-15 to 2023-02-15, and the y-axis denotes the values of the viral wastewater concentrations and the number of cases per 100k population. Subfigure (a) describes the aggregated trend of the nation, and (b) describes two randomly picked states, Georgia and Mississippi.

5.2 Visualization of Segmentation Result

After calibrating the hyperparameters on the expert-identified waves from March 2020 to February 2022 [28], we use the Binary Change Point algorithm [9] to detect the change points in the wastewater virus concentration level data. In our case, the expert data segmentation consists of five points, forming six distinct waves. As a result, we opted to include all of these points in the calculation of the score function during the calibration process.
Figure 2 demonstrates that the detected change points closely align with the expert-identified waves and that our method can accurately detect change points even in areas not covered by the expert data segmentation.

Figure 2: Segmentation results using Binary Change Point Detection. The green dotted lines represent expert-identified change points, while the red dotted lines indicate our detected change points. The x-axis denotes the days passed since 2020-01-15, and the y-axis shows the viral wastewater concentration level. Our model's detected change points exhibit close correspondence with expert-identified points.

5.3 Evaluation across Varied End Dates

To assess the accuracy of our models, we evaluate their performance throughout the course of the pandemic. Figure 3 represents the Normalized Root Mean Square Error (NRMSE) of each model over the different end dates, allowing for a comparative analysis of model consistency and adaptability across time. We compare our results with a random forest model developed by Li et al. [13]. Their model was trained on diverse data, including hospitalization and ICU admission records, CCVI indexes, and vaccination records, among others. Notably, their work does not clearly delineate the date range for the test data, a factor that could significantly impact the model's accuracy.

Figure 3 shows that the models perform relatively poorly in the early stages of the pandemic but improve significantly in the later stages, even during a sudden peak in early 2022. In the later stages of the pandemic (after July 2021), as shown in Figure 3, all five models reach performance on par with the baseline model, indicating an NRMSE below 1.0. This suggests that, on average, the model's prediction error is less than the standard deviation of the observed data, which is over 200 cases during the peak.
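The text reads an NRMSE below 1.0 as a prediction error smaller than the standard deviation of the observed data, which implies RMSE normalized by that standard deviation; a sketch under that assumption:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalized by the standard deviation of the observations, so a
    score of 1.0 matches a constant predictor fixed at the observed mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / np.std(y_true))
```

Under this normalization, any model scoring below 1.0 is extracting more signal than the trivial mean-value baseline, which is the comparison the passage above relies on.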
The performance at the early stages is worse, possibly due to the lack of sufficient data to learn the inherent temporal correlations.

5.4 Impact of Segmentation on Evaluation

In addition to evaluating the performance on different dates, we also conduct an experiment to understand how wave segmentation impacts the evaluation of our models. Figure 4 shows model performance with and without segmentation. Performance differences are more noticeable during peak periods, likely due to rapid trend shifts that make the prediction task difficult.

We remark that this experiment highlights the importance of segmentation in this task of predicting case counts, particularly during volatile periods. Omitting this segmentation method, as is the case in [13], could lead to inaccuracies in the Normalized Root Mean Square Error (NRMSE), as multiple waves in the test data may mask inaccuracies within one particular wave. Therefore, we present results obtained with the segmentation evaluation method for all subsequent experiments. It is also worth noting that these results are based on the assumption of perfect up-to-date knowledge; results based on more relaxed assumptions are discussed in the following subsection.

Figure 3: Performance comparison of models across end dates. The x-axis denotes the end date of the test period, while the y-axis represents the normalized root mean square error (NRMSE) of the prediction for the number of cases. The grey curve denotes the actual number of cases. The dotted line denotes the reported performance of the model in [13].

Figure 4: Prediction accuracy comparison for each model with and without segmentation. The x-axis is the end date, and the y-axis is the normalized root mean square error (NRMSE) of the prediction for the number of cases. The dotted lines denote evaluation results with segmentation performed, and the solid lines denote evaluation without segmentation.

5.5 Prediction Accuracy across Varied Forecasting Horizons

We further examine our models' prediction accuracy considering varying forecasting horizons (the number of days in advance when making the prediction) at three distinct end dates. These dates are selected based on the previous empirical results to be representative of the different waves. This setting mirrors the real-life context where decisions often need to be made several days in advance.

The outcomes, displayed in Figures 5(a), 5(b), and 5(c), show an expected trend: an increased forecasting horizon generally corresponds to decreased prediction accuracy. This trend can be attributed to the increased challenges introduced by longer response times. However, there are instances where model accuracy improves with an increased forecasting horizon, likely due to the inherent variability in the data. Notably, on all three dates, the GPR and MLP models perform the best, likely due to their smaller parameter counts and simpler structures. Based on these results, we recommend 6 to 12 days as a good trade-off between a longer forecasting horizon and better prediction accuracy, as the prediction error generally does not increase much within this range.

6 CONCLUSIONS

In this study, we explored the feasibility of utilizing publicly available wastewater data to forecast the number of COVID-19 cases. We employed five representative time-series prediction methods to capture the temporal associations within the viral wastewater concentration levels and case count data.
Our empirical results show that the resulting models performed comparably with those trained on a more diverse range of data sources, underscoring the viability of this approach even with restricted data access.

Furthermore, our research underscores the importance of data segmentation during evaluation to better comprehend the inherent relationship between wastewater data and COVID-19 case counts. This segmentation approach addresses the complexities posed by test data spanning multiple waves, which can influence model evaluation metrics. Grounded in our empirical findings, we also propose practical guidelines regarding the forecasting horizon for case count prediction.

We hope that the findings of this study contribute to the growing body of research on wastewater-based epidemiology and provide valuable insights into the challenges and potential solutions for accurate epidemic forecasting using wastewater data, which can be applied in real-world scenarios to improve public health surveillance and inform decision-making processes. We acknowledge the complexities introduced by evolving testing and reporting practices during the COVID-19 pandemic, which make it increasingly hard to acquire ground-truth data; therefore, alternative metrics like mortality data may gain prominence in different stages of epidemiological forecasting. We also acknowledge the existence of other publicly accessible data sources of varying types that may be utilized, including the reproductive number [12], hospitalization numbers, and mortality rates [10, 36]. These additional data sources present ample opportunities for future research directions, broadening the scope of our current understanding and forecasting capabilities in public health scenarios.

Figure 5: Prediction accuracy corresponding to different lead times at three different dates. Subfigures (a), (b), and (c) show the performance comparison with respect to the number of days to react on 2021-12-15, 2022-07-03, and 2022-10-11, respectively. The x-axis indicates the forecasting horizon, and the y-axis denotes the normalized root mean square error (NRMSE) of the prediction of the number of cases. The three dates are chosen to illustrate the models' performance at distinct waves during the pandemic.

ACKNOWLEDGMENTS

Zhicheng Zhang, Fei Fang, Angel Desai, and Sonja Neumeister were supported in part by grant SES2200228 from the National Science Foundation. Maimuna Shahnaz Majumder was supported in part by grant R35GM146974 from the National Institute of General Medical Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Zhicheng Zhang was supported in part by the SCS Dean's Fellowship.
BTgzkybmbV
WBE Forecasting during Covid-19
3: Marginally above acceptance threshold
# Clarity
I found the work easy to follow and well written.

# Quality
The applications of the work are clear for epidemiologists and data scientists.

# Originality
The data set used is novel, but the methods themselves are well studied and fairly straightforward.

The authors describe their experiments for using wastewater-based epidemiology (WBE) methods for case count prediction at the national level versus traditional epidemiological methods, which may require more extensive and less commonly available data. The experiments show similar levels of accuracy for prediction of case counts moving forward given prior time series data. The work is novel in the questions that it asks and the analysis that it provides on the foundations of WBE.

I think that the data set itself could be expanded on, however. The specific value being measured against is only mentioned in Figure 1 (Effective copies of genome per $\mu L$), and it is unclear if there are other predictive factors being looked at. The authors note that they aggregate wastewater data to a country level for making predictions due to the biases in data collection, but is national level data granular enough to be useful? The authors could do an analysis on the more regional data as well to see if the accuracy of their predictions holds up at the county/city level. This could be used as evidence for expansion of this data collection into these more rural areas as well.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
J8Gc5acxME
KDD.org/2023/Workshop/epiDAMIK
2023
Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19
["Zhicheng Zhang", "Sonja Neumeister", "Angel Desai", "Maimuna S. Majumder", "Fei Fang"]
The COVID-19 pandemic has emphasized the necessity for effective tools to monitor and predict epidemiological trends. Traditional approaches to disease surveillance possess certain limitations, leading to the emergence of wastewater-based epidemiology (WBE) as a complementary approach. WBE has demonstrated a strong correlation with traditional epidemiological indicators (e.g., number of clinical cases and hospitalization), which makes it a valuable asset in informing public health decision-making processes. Despite the promising prospects of WBE, it faces certain challenges, including restricted data accessibility, geographical bias in data coverage, high data noise levels, and significant data distribution shifts. In this study, we examine the feasibility of utilizing exclusively two publicly available data, specifically aggregated wastewater data and reported case counts, for epidemiological forecasting in the COVID-19 pandemic. We incorporate a variety of statistical and machine learning models in an attempt to address the inherent volatility and bias of the data. We further introduce the usage of the segmentation method during the evaluation phase as a better evaluation metric. Our empirical results show that, even with limited data, performing epidemiological forecasting is possible, and its performance is comparable with methods that use more diverse data sources, suggesting its potential for broader health applications. Additionally, we utilize the insights from results on the length of the forecasting horizon to provide practical guidelines regarding real-world prediction.
["COVID-19", "Disease Surveillance", "Wastewater-Based Epidemiology", "Time-Series Forecasting"]
ABSTRACT

The COVID-19 pandemic has emphasized the necessity for effective tools to monitor and predict epidemiological trends. Traditional approaches to disease surveillance possess certain limitations, leading to the emergence of wastewater-based epidemiology (WBE) as a complementary approach. WBE has demonstrated a strong correlation with traditional epidemiological indicators (e.g., number of clinical cases and hospitalizations), which makes it a valuable asset in informing public health decision-making processes. Despite the promising prospects of WBE, it faces two main challenges: restricted data accessibility, and high intrinsic noise and distribution shift in the data. In this study, we examine the feasibility of utilizing exclusively two publicly available data sources, specifically aggregated wastewater data and reported case counts, for epidemiological forecasting in the COVID-19 pandemic. We incorporate a variety of statistical and machine learning models in an attempt to address the inherent volatility and bias of the data. We further introduce the usage of the segmentation method during the evaluation phase as a better evaluation metric. Our empirical results show that, even with limited data, performing epidemiological forecasting is possible, and its performance is comparable with methods that use more diverse data sources, suggesting its potential for broader health applications. Additionally, we utilize the insights from results on the length of the forecasting horizon to provide practical guidelines regarding real-world prediction.

KEYWORDS

COVID-19, Disease Surveillance, Wastewater-Based Epidemiology, Time-Series Forecasting

ACM Reference Format:
Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fang. 2023.
Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 8 pages.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA
© 2023 Copyright held by the owner/author(s).

1 INTRODUCTION

The COVID-19 pandemic has emphasized the importance of reliable tools for monitoring and forecasting epidemiological trends. Traditional disease surveillance approaches, based on clinical data, have limitations in both timeliness and coverage. Wastewater-based epidemiology (WBE) has thus emerged as a complementary approach to track the spread of infectious diseases in communities [8]. WBE has demonstrated significant potential in the monitoring and forecasting of epidemics, particularly during the COVID-19 pandemic. Several studies have utilized wastewater data to forecast clinical cases, hospitalizations, and ICU admissions, as well as to evaluate the effectiveness of governmental policies in containing COVID-19 transmission [10, 12, 13, 27]. Studies have found a strong link between data from wastewater surveillance and disease indicators. This link can help make better health decisions, use resources wisely, and put interventions in place quickly.

However, despite the promising results of WBE, there are two main challenges that need to be addressed for broader practical applications, which have not been thoroughly explored in the existing literature.
First, current approaches using WBE mainly rely on small-scale, privately collected data, such as those from university campuses [36], or on inaccessible private-sector wastewater data [10, 12]. Often, methods supplement wastewater data with additional data sources, including the Community Vulnerability Index (CCVI) and vaccination records [13]. In a broader context, the sharing of wastewater data is restricted, and its coverage is geographically skewed towards economically developed areas that have a greater number of wastewater monitoring facilities [18, 23]. Second, real-world epidemiological data is inherently noisy due to various factors such as sampling errors and challenges in attributing causes [24]. This issue is further exacerbated during global pandemics like COVID-19, where the temporal correlations within the data can drastically shift over the course of the pandemic, undermining the accuracy of predictions. Such drastic shifts can occur when a new variant emerges and rapidly becomes dominant or when vaccination rates significantly increase, both of which cause distinct changes in epidemiological trends. These shifts underscore the need for robust forecasting models capable of adapting to evolving pandemic dynamics.

In this study, we focus on two publicly available datasets: aggregated wastewater data and reported case counts, both at the country level. This selection of datasets is driven by the ready accessibility and reliability of these data sources: wastewater data is regularly published not only by the CDC's National Wastewater Surveillance System (NWSS) but also by other agencies adhering to CDC protocols, while case count numbers are widely reported. This widespread adoption of consistent data-gathering protocols ensures the broad availability and comparability of these datasets.
It also aims to alleviate volatility and mitigate biases inherent in smaller or less developed regions. The COVID-19 pandemic's landscape has been constantly changing, influencing how we assess its spread and impact. Initially, the case count data, encompassing both severe and mild cases, offered valuable insight into the pandemic's trajectory. This metric was particularly comprehensive during periods of widespread testing and reporting. However, as the pandemic has progressed, testing methods and reporting practices have evolved, with an increase in home testing and a decrease in reports to governmental agencies. While these changes present challenges, case counts still serve as a strong signal of disease prevalence. Our core objective here is to investigate the feasibility of using only these two publicly available data sources, case counts and wastewater data, for epidemiological forecasting.

To evaluate this feasibility, we model the problem as a time-series forecasting problem characterized by significant distribution shifts in the data over time. We employ data preprocessing techniques to manage misaligned time-series data and introduce a segmentation algorithm during the evaluation phase to account for temporal shifts. This segmentation method enhances evaluation accuracy by ensuring that the test data spans only one wave, so that the test error is no longer masked by the results in other waves, and we empirically evaluate it to be a better evaluation criterion. To balance interpretability, simplicity, and prediction accuracy, we implement a variety of statistical and machine learning models, including linear regression, ARIMAX, Gaussian process regression, multi-layer perceptron (MLP), and long short-term memory (LSTM) networks. The diversity of these modeling techniques enables us to compare the efficiency of simpler models with their more complex, deep-learning counterparts.
Finally, our analysis shows that by only using aggregated wastewater data and reported case counts, we can achieve performance comparable with a random-forest model trained on diverse data sources, including CCVI indexes and vaccination records, in [13]. We further empirically demonstrate that the segmentation method provides a more accurate evaluation, particularly during volatile periods such as the case count peak in early 2022. Based on the empirical results on the effect of forecasting horizons of different lengths, we provide a practical recommendation for selecting the forecasting horizon in order to optimize the balance between reaction time and prediction accuracy.

2 RELATED WORK

Wastewater-based epidemiology. Wastewater-based epidemiology (WBE) has become an important tool for monitoring and forecasting epidemiological trends over the past two decades [8]. During the recent outbreak of COVID-19 [6], wastewater data was used to forecast clinical cases, hospitalizations, and ICU admissions, as well as to evaluate the effectiveness of governmental policies [10, 12, 13, 27]. Galani et al. [10], Kaplan et al. [12], and Stephens et al. [27] measured the wastewater at a number of monitoring sites and empirically demonstrated a strong correlation between hospitalizations and wastewater surveillance data using regression models. Kaplan et al. [12] used wastewater data to estimate reproductive numbers. Li et al. [13] used data from 100 USA counties to predict hospital and ICU admission numbers using random forest models.

However, despite its effectiveness in predicting epidemiological trends, wastewater data were not widely shared with the public or accessible to researchers, making it infeasible to perform additional analyses [18].
Current works often rely on small-scale, privately collected datasets [36], or supplement the dataset with other diverse sources of data, like vaccination records and CCVI indexes [13]. In addition, the coverage of wastewater data is severely biased toward economically more developed geographic regions with more wastewater monitoring facilities [18, 23]. In an attempt to address these challenges, our approach differs from previous work in that we aim to assess the promise of using exclusively two publicly available data sources, aggregated wastewater data and reported case count data, both easily accessible to the public, for epidemiological forecasting. Specifically, we focus on data within the United States while averaging it across the country to minimize bias in wastewater data from smaller or less-developed counties and states.

Time-series forecasting. Time-series forecasting has been a long-standing problem in the fields of statistics and machine learning, attracting significant research attention. Classical methods [3, 16] provide a comprehensive understanding of time-series analysis and forecasting and offer both theoretical insights and statistical guarantees. The advent of deep learning-based methods, particularly recurrent networks, has substantially improved the ability to capture temporal correlations in training data, as demonstrated by recurrent neural networks (RNNs) [22] and long short-term memory (LSTM) networks [11]. In recent years, long-term series forecasting (LSTF) research has focused on transformer-based models [30] due to their remarkable success in various application domains, such as natural language processing (NLP) [20] and computer vision (CV) [15]. Transformer-based LSTF models [14, 32, 34, 37, 38] have demonstrated impressive forecasting performance while also prioritizing prediction efficiency.
However, recent criticism by Zeng et al. [35] suggests that the self-attention mechanism in transformers inevitably leads to temporal information loss, and their empirical results indicate that these models may not even outperform simple one-layer linear models in certain experiments.

In the domain of time series forecasting with scarce data, deep learning models frequently adopt less complicated architectures to enhance model performance. Tsaur [29] employed fuzzy grey regression models, while Abdulmajeed et al. [1] utilized an ensemble of several auto-regressive models to improve accuracy and robustness in predicting COVID-19 cases in Nigeria.

Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19. epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA.

Informed by these insights, our approach emphasizes the use of simpler and more interpretable models when working with limited wastewater and case count data aggregated across the country. Specifically, we employed linear regression models, ARIMAX models, and Gaussian process regression models with a combination of kernels to address the problem of noise in the data. Additionally, we conducted a comparative analysis with deep learning models, including multi-layer perceptron (MLP) and LSTM models, to evaluate the effectiveness of our chosen methodology in the context of limited data.

3 PRELIMINARIES
Time-series forecasting. The primary objective of time-series forecasting [19, 25] is to make accurate predictions of future values in a sequence, utilizing historical observations as a basis. Consider a set of observed data points x_1, ..., x_t, where x_i ∈ X; the aim is to forecast the corresponding labels y_1, ..., y_t for each timestep, ranging from 1 to t, with y_i ∈ Y.
Let h represent the look-back window size; when predicting the label y_i, the prediction model can take as input H = {x_{i−h+1}, ..., x_i} or H = {x_{i−h+1}, ..., x_i, y_{i−h+1}, ..., y_{i−1}}. This constraint ensures that predictions rely solely on information available within the specified historical context.

Wastewater-based Epidemiology. Wastewater-based epidemiology (WBE) is an approach to public health surveillance that leverages the detection of biological or chemical markers present in sewage to reflect the health status of a region [21]. In the case of COVID-19, the wastewater data measures genetic fragments of the SARS-CoV-2 virus excreted in stool, specifically targeting the N1 and N2 regions of the nucleocapsid gene, to determine COVID-19 concentrations.

4 METHOD
In this section, we detail our data preprocessing steps, modeling techniques, and evaluation methods. Our focus of the training method lies in aligning misaligned time-series data, computing input embeddings, and employing models that strike a balance between simplicity, interpretability, and predictive accuracy. We also introduce a wave-based segmentation approach for evaluation, arguing its effectiveness as a more accurate metric and discussing its calibration using expert-identified waves.

4.1 Data Processing
To ensure the quality and consistency of the data used for training and evaluation, we first address the challenge of misaligned time series data and then segment the data into waves based on the observed distribution shifts. These preprocessing steps aim to improve the model's reliability and adaptability to changes in the underlying data distribution over time.

4.1.1 Handling Misaligned Time-Series Data. Dealing with inconsistent time intervals or irregular timestamps in time-series forecasting is a common challenge. In our study, the primary issue arises from the weekly updates of wastewater data (x_i) and the daily updates of case count data (y_i).
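As an editorial illustration (not the authors' code), the look-back-window constraint H = {x_{i−h+1}, ..., x_i} from the preliminaries can be sketched in a few lines; the function name and list-based types here are assumptions:

```python
def make_lookback_windows(xs, ys, h):
    """Build (history, label) training pairs: each label y_i is paired
    with the h most recent observations H = {x_{i-h+1}, ..., x_i}, so a
    prediction never uses information beyond index i."""
    pairs = []
    for i in range(h - 1, len(xs)):
        history = xs[i - h + 1 : i + 1]  # the look-back window H
        pairs.append((history, ys[i]))
    return pairs
```

The same construction extends to the variant that also includes past labels y_{i−h+1}, ..., y_{i−1} in the history.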
There are two main strategies to address this misalignment: removing data points without corresponding labels or utilizing all available data, for instance, through interpolation [31]. Our approach is to associate each element x_t in the wastewater dataset X with all elements that fall within the interval between two successive wastewater data updates. Specifically, for each x_t in the dataset X, we define:

x̂_t = {x_t} ∪ { y_i | T_{x_{t−1}} < T_{y_i} < T_{x_t} },    (1)

where T_x denotes the timestamp of the event x, and y_t is treated as the ground truth label. The augmented x̂_t now includes the wastewater data point at time t and all case count data points whose timestamps T_{y_i} are strictly greater than the timestamp T_{x_{t−1}} of the preceding wastewater data point and strictly less than the timestamp T_{x_t} of the current wastewater data point. The reason behind this decision is to maximize data utilization. However, it may not always reflect real-world scenarios, where all data might not be up-to-date, or future trends a few days from now need to be predicted. We empirically evaluate the impact of such delays when doing forecasting in Section 5.5.

4.1.2 Embedding of input data. As shown in Figure 1, there exists a lead-lag relationship [4, 13] between the wastewater data and the case count data. Specifically, signals in the wastewater data often precede signals in the case count data by a span of several days or weeks. To accommodate this time-shifted relationship, we implement a sliding window approach for both the wastewater and case count data inputs. Formally, for a selected time point i, and a window size h_w for wastewater data and h_c for case count data, we generate input sequences X^wastewater_i and X^casecount_i, respectively, as:

X^wastewater_i = [w_{i−h_w}, ..., w_{i−l_w}],   X^casecount_i = [c_{i−h_c}, ..., c_{i−l_c}],    (2)

where w_j denotes the wastewater data and c_j denotes the case count data at time j. l_c and l_w are used to simulate the information available at the time of prediction in the real world.
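The grouping rule of Eq. (1) can be sketched as follows (a minimal illustration; timestamps are plain day indices and the function name is an assumption, not the authors' implementation):

```python
def attach_case_counts(ww_times, case_points):
    """Implement the grouping of Eq. (1): a case count observation
    (T_{y_i}, y_i) is attached to the wastewater update at time T_{x_t}
    iff T_{x_{t-1}} < T_{y_i} < T_{x_t} (strict inequalities)."""
    groups = {}
    prev = float("-inf")  # no wastewater update precedes the first one
    for t in ww_times:
        groups[t] = [y for (ty, y) in case_points if prev < ty < t]
        prev = t
    return groups
```

For example, with weekly wastewater updates at days 7 and 14, daily case counts on days 5, 6, 8, and 13 are split into the two corresponding intervals.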
l_w = l_c = 1 means that the prediction model is given all the data up to date. To maintain scale consistency across all data points, we normalize the case count data using a min-max scaler, deriving the scaling parameters from historical data. This process ensures the data maintains its inherent trend and distribution characteristics while being compatible with the model input, especially the deep learning models.

4.2 Modeling Techniques for Time-series Data
In the context of limited data, the ideal model to capture temporal correlations should balance simplicity, interpretability, and a lower parameter count. More complex models, while potentially improving performance, might overfit the data and compromise interpretability and deployability. Therefore, in this study, our emphasis is on methodologies that ensure adequate predictive accuracy while maintaining computational feasibility and transparency in interpreting data patterns.

epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fang.

(1) Linear Regression Model [17]: Used as a benchmark, this simple model provides a baseline for performance comparison.
(2) ARIMAX Model [2]: Serving as a robust statistical model, ARIMAX extends the traditional ARIMA model by incorporating exogenous inputs, which helps in modeling complex temporal structures in the presence of influential external factors, which suits our dataset with a lead-lag relationship.
(3) Gaussian Process Regression (GPR) Model: This model leverages a custom kernel for handling non-linear relationships and noisy data.
Our kernel construction, k = (Constant × RBF) + White + Matern, involves a multiplicative interaction of Constant and RBF kernels, along with an additive incorporation of a White kernel for noise management and a Matern kernel for smoothness.
(4) Multi-layer perceptron (MLP): A widely employed neural network for regression problems, our implementation features two hidden layers with 128 units each and ReLU as the activation function.
(5) Long Short-Term Memory (LSTM) model [11]: As a type of recurrent neural network, LSTMs are capable of capturing temporal dependencies in data, making them well-suited for time series forecasting tasks. LSTMs can learn to filter out noise by selectively retaining valuable information through gating mechanisms. To mitigate overfitting, we incorporate a dropout [26] rate of 0.5 after each layer in the model and add an L2 regularization term.

4.3 Wave-based Segmentation
One important observation for pandemic-related data is the dynamic nature of the underlying distribution over time. This variability can be attributed to several factors, including the emergence of different viral variants [5], changes in vaccination status among the population [7], and the implementation of varied government policies [33]. The presence of these distribution shifts significantly complicates the prediction process. To address this issue, we propose splitting the data into waves, where each wave is assumed to have a relatively stable distribution. We employ Binary Change Point Detection [9] for identifying time-series data change points, chosen for its ability to detect multiple change points, its lack of a requirement for a predetermined number of change points, and its computationally efficient O(Cn log n) complexity.

4.3.1 Hyperparameter Calibration. Once the waves are identified, we calibrate the model's hyperparameters, including the cost function, penalty term, and minimal distance between two change points, to fit the waves recognized by domain experts.
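Returning to the GPR model above, its composite kernel, (Constant × RBF) + White + Matern, can be sketched in closed form for scalar inputs; the Matern part is written with smoothness ν = 3/2 for concreteness, and all hyperparameter values are illustrative assumptions, not the paper's fitted values:

```python
import math

def composite_kernel(x1, x2, c=1.0, l_rbf=1.0, noise=0.1, l_mat=1.0):
    """Sketch of a (Constant x RBF) + White + Matern composite kernel."""
    r = abs(x1 - x2)
    k_rbf = c * math.exp(-(r ** 2) / (2 * l_rbf ** 2))  # Constant * RBF
    k_white = noise if x1 == x2 else 0.0                # White (noise) kernel
    s = math.sqrt(3.0) * r / l_mat                      # Matern with nu = 3/2
    k_matern = (1.0 + s) * math.exp(-s)
    return k_rbf + k_white + k_matern
```

In practice one would compose kernel objects instead (e.g., scikit-learn's `ConstantKernel * RBF + WhiteKernel + Matern`); the closed forms above just make the additive and multiplicative pieces explicit.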
We formulate a scoring function and select the optimal hyperparameters on the validation data. Given a set of detected change points CP = {cp_1, cp_2, ..., cp_n} and a set of expert-identified waves W = {w_1, w_2, ..., w_m}, we define a score function as

S(CP, W, α, β) = Σ_{i=1}^{m} exp(−α d(w_i, CP)) − β |n − m|,    (3)

where α is the decay factor for the impact of the distance between the detected change points and the actual waves, β is the penalty coefficient that penalizes the absolute difference between the number of detected waves and the number of actual waves, and d(w_i, CP) denotes the closest distance between wave w_i and the set of detected change points in CP. The objective is to find hyperparameters that minimize this score:

CP★ = arg min_{α,β} S(CP, W, α, β).    (4)

Minimizing this metric allows us to select the hyperparameters that optimally align the detected change points with the expert-identified waves while balancing proximity and the penalty for the difference in the number of change points and waves.

4.3.2 Evaluation using Wave-based Segmentation. Our approach leverages wave-based segmentation for evaluation. Once we separate our dataset D into training, D_train, and testing sets, D_test, we restrict the test data to have just one segment. Mathematically, if S_test represents all segments in D_test, we ensure that |S_test| = 1. This methodology mirrors real-world conditions more accurately, as predicting data of new waves often requires substantial additional information.
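The calibration score of Eq. (3) can be sketched as follows (inputs are lists of time indices; this is an illustrative sketch, not the authors' implementation):

```python
import math

def segmentation_score(change_points, expert_waves, alpha, beta):
    """S(CP, W, alpha, beta) from Eq. (3): a proximity term
    sum_i exp(-alpha * d(w_i, CP)) minus a penalty beta * |n - m| on the
    mismatch between the number of detected change points (n) and
    expert-identified waves (m)."""
    n, m = len(change_points), len(expert_waves)
    proximity = sum(
        math.exp(-alpha * min(abs(w - cp) for cp in change_points))
        for w in expert_waves
    )
    return proximity - beta * abs(n - m)
```

Calibration then amounts to looping over detector settings (cost function, penalty term, minimal distance between change points), running the change point detector for each setting, and evaluating this score on the resulting CP.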
We avoid using wave-based segmentation in training due to potential data leakage issues, as it commonly uses global data to determine segmentation, which could inadvertently affect the results.

5 EXPERIMENTS
In this section, we outline the experimental setup, including data visualization and segmentation results, and present the empirical results obtained by evaluating the five models for the task of predicting case counts.

5.1 Experimental Setup
Our experiments exclusively use publicly available data, namely wastewater data¹ and case count and death data², which are originally aggregated at the county or state level and, therefore, pose inherent challenges due to their noisy nature. The case count data serve as ground truth for our prediction task. Owing to variability in the collection of county/state-level data, we aggregate all data at the national level and utilize the nationwide average for our analysis. Composed of wastewater data and case count data, our dataset spans from January 15, 2020, to February 15, 2023. Wastewater data is reported on a weekly basis (162 data points), while case count data are collected daily (1128 data points). For all the experiments, we report the mean and standard deviation of 6 runs.

To better understand the correlation between wastewater data and the case counts, we visualize the trends in the data in Figure 1. We aggregate the data at the national level due to the high variability and statistical noise inherent in the state-wise data, as evidenced in Figure 1(b).
As shown in Figure 1 with the shifted wastewater curve, a strong association exists between the trend of virus concentration levels in wastewater and that of the number of cases, with wastewater data trends slightly preceding those of case counts. However, it is important to underscore that despite the exhibited association between the two trends, the relationship between their absolute numbers is not straightforward.

¹https://github.com/biobotanalytics/covid19-wastewater-data
²https://usafacts.org/visualizations/coronavirus-covid-19-spread-map/

Figure 1: Temporal Correlation between Wastewater Viral Concentrations and Case Counts per 100k population. The x-axis shows the dates ranging from 2020-01-15 to 2023-02-15, and the y-axis denotes the values of the viral wastewater concentrations and the number of cases per 100k population. Subfigure (a) describes the aggregated trend of the nation, and (b) describes two randomly picked states, Georgia and Mississippi.

5.2 Visualization of Segmentation Result
After calibrating the hyperparameters on the expert-identified waves from March 2020 to February 2022 [28], we use the Binary Change Point algorithm [9] to detect the change points in the wastewater virus concentration level data. In our case, the expert data segmentation consists of five points, forming six distinct waves. As a result, we opted to include all of these points for the calculation of the score function during the calibration process.
Figure 2 demonstrates that the detected change points closely align with the expert-identified waves and that our method can accurately detect change points even in areas not covered by the expert data segmentation.

Figure 2: Segmentation results using Binary Change Point Detection. The green dotted lines represent expert-identified change points, while the red dotted lines indicate our detected change points. The x-axis denotes the days passed since 2020-01-15, and the y-axis shows the viral wastewater concentration level. Our model's detected change points exhibit close correspondence with expert-identified points.

5.3 Evaluation across Varied End Dates
To assess the accuracy of our models, we evaluate their performance throughout the course of the pandemic. Figure 3 represents the Normalized Root Mean Square Error (NRMSE) of each model over the different end dates, allowing for a comparative analysis of model consistency and adaptability across time. We compare our results with a random forest model developed by Li et al. [13]. Their model was trained on diverse data, including hospitalization and ICU admission records, CCVI indexes, and vaccination records, among others. Notably, their work does not clearly delineate the date range for the test data, a factor that could significantly impact the model's accuracy.

Figure 3 shows that the models perform relatively poorly in the early stages of the pandemic but improve significantly in the later stages, even during a sudden peak in early 2022. In the later stages of the pandemic (after July 2021), as shown in Figure 3, all five models reach performance on par with the baseline model, indicating an NRMSE below 1.0. This suggests that, on average, the model's prediction error is less than the standard deviation of the observed data, which is over 200 cases during the peak.
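The NRMSE reading used here (values below 1.0 mean the error is smaller than the spread of the observed data), combined with the one-wave-per-test-set protocol of Section 4.3.2, can be sketched as follows; normalizing by the population standard deviation is an assumption consistent with that reading:

```python
import math

def nrmse(y_true, y_pred):
    """RMSE normalized by the standard deviation of the observed series."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    std = math.sqrt(sum((t - mean) ** 2 for t in y_true) / n)
    return math.sqrt(mse) / std

def nrmse_per_wave(y_true, y_pred, wave_starts):
    """Evaluate each wave separately so an easy wave cannot mask errors
    in a volatile one; `wave_starts` holds segment start indices."""
    edges = list(wave_starts) + [len(y_true)]
    return [nrmse(y_true[a:b], y_pred[a:b]) for a, b in zip(edges, edges[1:])]
```

Per-wave scores make it visible when a model tracks calm periods well but fails during a peak, which a single whole-series NRMSE can hide.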
The performance at the early stages is worse, possibly due to the lack of sufficient data to learn the inherent temporal correlation.

5.4 Impact of Segmentation on Evaluation
In addition to evaluating the performance on different dates, we also conduct an experiment to understand how wave segmentation impacts the evaluation of our models. Figure 4 shows model performance with and without segmentation. Performance differences are more noticeable during peak periods, likely due to rapid trend shifts that make the prediction task difficult.

We remark that this experiment highlights the importance of segmentation in this task of predicting case counts, particularly during volatile periods. The omission of this segmentation method, as is the case in [13], could lead to inaccuracies in the Normalized Root Mean Square Error (NRMSE), as multiple waves in the test data may mask inaccuracies within one particular wave. Therefore, we present the results with the segmentation evaluation method for all subsequent experiments. It is also worth noting that these results are based on the assumption of perfect up-to-date knowledge. Results based on more relaxed assumptions are discussed in the following subsection.

Figure 3: Performance comparison of models across end dates. The x-axis denotes the end date of the test period, while the y-axis represents the normalized root mean square error (NRMSE) of the prediction for the number of cases. The grey curve denotes the actual number of cases. The dotted line denotes the reported performance of the model in [13].

Figure 4: Prediction accuracy comparison for each model with and without segmentation. The x-axis is the end date, and the y-axis is the normalized root mean square error (NRMSE) of the prediction for the number of cases. The dotted lines denote evaluation results with segmentation performed, and the solid lines denote evaluation without segmentation.

5.5 Prediction Accuracy across Varied Forecasting Horizon
We further examine our models' prediction accuracy considering varying forecasting horizons (the number of days in advance when making the prediction) at three distinct end dates. These dates are selected based on the previous empirical results to be representative of the different waves. This setting mirrors the real-life context where decisions often need to be made several days in advance.

The outcome, displayed in Figures 5(a), 5(b), and 5(c), shows an expected trend: an increased forecasting horizon generally corresponds to decreased prediction accuracy. This trend can be attributed to the increased challenges introduced by longer response times. However, there are instances where model accuracy improves with an increased forecasting horizon, likely due to the inherent variability in the data. Notably, on all three dates, the GPR and MLP models perform the best, likely due to their smaller parameter count and simpler structure. Based on the results, we recommend that 6 to 12 days is a good trade-off between a longer forecasting horizon and better prediction accuracy, as the prediction error generally does not increase much within this range.

6 CONCLUSIONS
In this study, we explored the feasibility of utilizing publicly available wastewater data to forecast the number of COVID-19 cases. We employed five representative time-series prediction methods to capture the temporal associations within the viral wastewater concentration levels and case count data.
Our empirical results show that the resulting models performed comparably with those trained on a more diverse range of data sources, underscoring the viability of this approach even with restricted data access. Furthermore, our research underscores the importance of data segmentation during evaluation to better comprehend the inherent relationship between wastewater data and COVID-19 case counts. This segmentation approach addresses the complexities posed by testing data spanning multiple waves, which can influence model evaluation metrics. Grounded in our empirical findings, we also propose practical guidelines regarding the forecasting horizon for case count prediction.

We hope that the findings of this study contribute to the growing body of research on wastewater-based epidemiology and provide valuable insights into the challenges and potential solutions for accurate epidemic forecasting using wastewater data, which can be applied in real-world scenarios to improve public health surveillance and inform decision-making processes. We acknowledge the complexities introduced by evolving testing and reporting practices during the COVID-19 pandemic, which make it increasingly hard to acquire ground truth data, and therefore alternative metrics like mortality data may gain prominence in different stages of epidemiological forecasting. We also acknowledge the existence of other publicly accessible data sources of varying types that may be utilized, including reproductive numbers [12], hospitalization numbers, and mortality rates [10, 36]. These additional data sources present ample opportunities for future research directions, broadening the scope of our current understanding and forecasting capabilities of public health scenarios.

Figure 5: Prediction accuracy corresponding to different lead times at three different dates. Subfigures (a), (b), and (c) show the performance comparison w.r.t. the number of days to react on 2021-12-15, 2022-07-03, and 2022-10-11, respectively. The x-axis indicates the forecasting horizon, and the y-axis denotes the normalized root mean square error (NRMSE) of the prediction of the number of cases. The three dates are chosen to illustrate the models' performance at distinct waves during the pandemic.

ACKNOWLEDGMENTS
Zhicheng Zhang, Fei Fang, Angel Desai, and Sonja Neumeister were supported in part by grant SES2200228 from the National Science Foundation. Maimuna Shahnaz Majumder was supported in part by grant R35GM146974 from the National Institute of General Medical Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Zhicheng Zhang was supported in part by an SCS Dean's Fellowship.
Q9wUgksGK4
The work is somewhat significant
2: Marginally below acceptance threshold
## Clarity
This paper is easy to read; however, I found it hard to fully understand the proposed method.
## Quality
The work is well-motivated but the benefits of the proposed methods are not clear.
## Originality
Using ML methods for the two examined datasets is original.
## Significance
The work is somewhat significant.
## Pros:
- Has diverse methods for modeling the time-series data.
- The result for wave-based segmentation is interesting but needs more explanation.
- The experiment results show better performance compared with the previous method (Random Forest) with additional data sources; however, I am concerned about the evaluation process that is not consistent between the two works.
- Results show the potential of ML methods to get competitive results without additional data sources.
## Cons:
- The proposed method is poorly explained:
  - In equation (1), $\hat{x}$ is not mentioned before, and in the condition part the authors compare $x_t$ with $T$, which is confusing.
  - The “AD and MSM.” abbreviations need an explanation.
  - Need to introduce the role of $\alpha, \beta$ in getting the final change points.
- The benefit of the technique dealing with misaligned time-series data is not clear.
- The authors try to deal with the distribution shift problem by applying wave-based segmentation on test data; however, segmentation removes the variety of the trend inside one segment, so it seems easier for models to predict.
3: The reviewer is fairly confident that the evaluation is correct
rMSlLb33Gb
KDD.org/2023/Workshop/epiDAMIK
2023
A Snapshot of COVID-19 Incidence, Hospitalizations, and Mortality from Indirect Survey Data in China in January 2023 (Extended Abstract)
["Juan Marcos Ramirez", "Sergio Diaz-Aranda", "Jose Aguilar", "Oluwasegun Ojo", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts to survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a very close value to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained by the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates in relation to the consistency of the answers is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys to achieve a wide population coverage, preserving the privacy of the participants.
["COVID-19", "incidence estimation", "indirect surveys", "NSUM"]
ABSTRACT
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts to survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a very close value to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained by the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates in relation to the consistency of the answers is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys to achieve a wide population coverage, preserving the privacy of the participants.

KEYWORDS
COVID-19, incidence estimation, indirect surveys, NSUM

1 INTRODUCTION
To effectively manage public health resources, monitoring infectious diseases such as COVID-19 requires knowledge of various epidemic indicators, such as the number of cases, deaths, and hospitalizations, among others. Most of these indicators have been collected through the use of methods that require the presence of a substantial portion of the target population, such as antigen test screenings or hospital records. In order to overcome these disadvantages, several methods have used direct surveys to estimate indicators [1, 2]. Unfortunately, direct surveys depend on the participation of a large number of people to obtain reliable estimates, usually collect sensitive personal data (which may deter respondents due to privacy concerns), and require careful data manipulation.

An alternative to these surveys is using indirect surveys, which ask participants about the people in their contact network, rather than themselves. From the responses provided by indirect surveys, the estimates of different variables can be derived using the Network Scale-up Method (NSUM) [3, 4]. As a result of this approach, 1) a larger sub-population may be reached, 2) data collection costs may be reduced, 3) a computationally efficient method can be used to obtain estimates, and 4) participants will be assured of high levels of privacy. Indirect surveys have already been implemented for estimating indicators during the COVID-19 pandemic [5, 6].

In this work, we use indirect online surveys to capture a snapshot of cases, mortality, vaccination, and hospitalizations due to COVID-19 in China for the period of January 18-26, 2023.
To this end, a modified version of the NSUM approach that fixes the number of people in the contact network is used to estimate different epidemic indicators. In essence, this modified version extracts knowledge about epidemic indicators without resorting to additional control questions that usually are considered to estimate the reach (the number of people in the contact network). In addition, a data preprocessing stage is included, which comprises a set of consistency filters and a nonlinear outlier detection stage, to improve the reliability of the collected data. We validate our approach using data from Australia and the United Kingdom (UK) collected on January 19, 2023. These metrics are compared with respect to the official values reported by Our World in Data (OWID) and the Office for National Statistics (ONS) from the UK. In addition, we use Cronbach's alpha index [7], which is a reliability value to measure the internal consistency of the questionnaire generated by indirect surveys.

2 METHODS
2.1 Sampling Participants
We conducted online indirect surveys using the PollFish platform. Specifically, we conducted an online survey in China between January 18-26, 2023. This online survey collected information about various COVID-19 indicators (vaccination, deaths, and number of cases in the last month, the last 7 days, and the past 24 hours) among the 15 closest contacts of 1,000 participants (see Supplementary Information section for the English version of the survey questions). Notice that the selected number of closest contacts to respondents (15) is considered the size of the good-friends support group according to Dunbar's theory [8]. This number provides us a trade-off between the size of the subpopulation we aim to cover (reach) and the minimization of undesired effects due to respondents, such as transmission and recall errors [4].

Juan Marcos Ramírez, Sergio Díaz-Aranda, Jose Aguilar, Antonio Fernández Anta, Oluwasegun Ojo, and Rosa Elvira Lillo.
Additionally, for validation, we conducted online surveys in Australia (100 responses) and the UK (200 responses) on January 19, 2023. Table 3 in Supplementary Information shows the characteristics of the survey respondents (the platform provides information on gender, age group, education, and ethnicity). The respondents of each survey are also stratified by region. For instance, Fig. 1 in Supplementary Information shows a map of China where the intensity corresponds to the number of questionnaires completed in each province.

2.2 Data Analysis
In order to obtain a reliable dataset, we performed two subphases of preprocessing: (1) an inconsistency filter, and (2) a univariate outlier detection.
(1) The inconsistency filter removes participants with inconsistent responses: fewer infected contacts than fatalities, fewer infected contacts than hospitalized, fewer infected contacts in the last month than in the last 7 days, and fewer infected contacts in the last month than in the last 24 hours.
(2) Since the collected variables exhibit extremely skewed distributions, the robust outlier detection method reported in [9] is applied. Based on the variable data, this method first estimates the quartiles Q_1 and Q_3, as well as the interquartile range (IQR). Then, the whiskers Q_α and Q_β are set. Finally, this method preserves the samples in the interval limited by

[Q_1 − 1.5 e^{a·MC} IQR; Q_3 + 1.5 e^{b·MC} IQR],    (1)

where MC is the medcouple statistic that estimates the degree of skewness of the data. Samples outside the interval are marked as outliers and, consequently, are removed. In addition, to estimate the parameters a and b, we consider the system [9]

log( (2/3)(Q_1 − Q_α)/IQR ) ≈ a·MC,
log( (2/3)(Q_β − Q_3)/IQR ) ≈ b·MC,    (2)

where Q_α and Q_β are the α-th and β-th quantiles of the distribution, with α = 0.15 and β = 0.85.

We consider the NSUM approach to estimate the rates of the different COVID-19 indicators.
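Before continuing, the skewness-adjusted outlier fence of Eqs. (1)-(2) can be sketched as follows (a and b are assumed to have been calibrated already via Eq. (2); this is an illustrative sketch, not the authors' code):

```python
import math

def adjusted_interval(q1, q3, mc, a, b):
    """Fence from Eq. (1): [Q1 - 1.5 e^{a MC} IQR, Q3 + 1.5 e^{b MC} IQR],
    where MC is the medcouple skewness statistic."""
    iqr = q3 - q1
    lo = q1 - 1.5 * math.exp(a * mc) * iqr
    hi = q3 + 1.5 * math.exp(b * mc) * iqr
    return lo, hi

def filter_outliers(values, lo, hi):
    """Keep only samples inside the fence; the rest are marked as outliers."""
    return [v for v in values if lo <= v <= hi]
```

Note that for symmetric data (MC = 0) the fence reduces to the classic 1.5 × IQR boxplot rule, since e^0 = 1.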
In particular, NSUM is a statistical framework for estimating hidden populations from indirect surveys. There are three main NSUM approaches: frequentist models that estimate subpopulation rates, Bayesian models that include priors, and network models that estimate population properties [4]. To estimate cumulative incidences, hospitalization rates, and mortality rates, we modify an NSUM method belonging to the category of frequentist models based on maximum likelihood estimation (MLE). In this regard, let c_i be the number of contacts of the i-th respondent that have a particular characteristic, e.g., persons who have been hospitalized. Further, consider r_i the number of close contacts of the i-th respondent (which in this study is fixed at r_i = 15, as shown in the questions in the Supplementary Information). The requirement of close contacts is introduced to minimize the effect of the visibility bias [10] with respect to the classical method [3]. Hence, we estimate the aggregated rate, p, as Σ_i c_i / Σ_i r_i = Σ_i c_i / (15n), with n the number of responses (samples). The estimator's standard error is √(p(1−p)/(15n)), assuming that the c_i are independent binomial random variables with 15 trials and success probability p.

We evaluated the validity of our approach by comparing the difference between the official values reported on the Our World in Data (OWID)^1 platform and the values estimated by our approach for Australia and the United Kingdom (see Table 1). In both countries, official data were extracted between December 20, 2022, and January 19, 2023. In order to determine the number of hospitalized persons given the hospital occupancy, the length of a hospital stay is fixed at 4 days [12, 13].

Additionally, for the UK, we use the data provided by the Office for National Statistics (ONS)^2. In particular, for the number of cases we use the daily estimates of the infected population obtained by the Coronavirus (COVID-19) Infection Survey of the ONS.
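The aggregated rate and its 95% confidence interval follow directly from the formulas above. A minimal sketch (the function name is ours; the paper does not publish code):

```python
import math

def nsum_rate(c, r_i=15):
    """NSUM aggregated rate p = sum(c_i) / (15 n), with a 95% CI
    p +/- 1.96 * sqrt(p (1 - p) / (15 n)), clipped to [0, 1].
    `c` holds the per-respondent counts c_i; each respondent reports
    on r_i = 15 close contacts."""
    n = len(c)
    total = r_i * n
    p = sum(c) / total
    half = 1.96 * math.sqrt(p * (1.0 - p) / total)
    return p, (max(p - half, 0.0), min(p + half, 1.0))
```

For example, 100 respondents each reporting 3 affected contacts out of 15 yields p = 300/1500 = 0.2, with the interval shrinking as n grows.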
For the 7-day and last-month estimates, in order not to count the same cases multiple times, the sum of the daily percentages is divided by 10 days, an estimated average duration of infection with Omicron [14]. Hospitalizations are the sum of the weekly admission rates with COVID-19 in England from Dec 19, 2022, to Jan 22, 2023 (5 weeks). Mortality is the rate of registered deaths involving COVID-19 in England from Dec 17, 2022, to Jan 20, 2023.

Finally, we use Cronbach's alpha coefficient to measure the reliability of the results obtained from the indirect surveys. Specifically, it quantifies the reliability of a value of an unobservable variable constructed from the observed variables. The closer this coefficient is to its maximum value of 1, the greater the reliability of the measure; in general, values greater than 0.7 are considered sufficient to guarantee reliability. In this work, we compute Cronbach's alpha coefficient based on correlations [15].

3 RESULTS

Table 1 displays the estimates and the 95% confidence intervals for the surveys conducted in the UK and Australia. In addition, it shows the statistics provided by official reports. The confidence interval is computed as p ± 1.96·√(p(1−p)/(15n)). As can be observed, the vaccination estimates are very close to the official values: they are estimated as 76.50% (73.70% - 79.29%) and 78.86% (95% confidence interval: 77.00% - 80.72%) in Australia and the UK, respectively, while the official (OWID) values are 84.95% and 79.71%. In the case of mortality and hospitalizations in the last month, the official values are within the confidence interval of our estimates in the case of Australia. Specifically, the mortality rate is 0.34% (0.00% - 0.72%) while the official value is 0.005%, and the hospitalization rate is 1.02% (0.36% - 1.68%) while the official value is 0.112%.
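A common correlation-based form of Cronbach's alpha is the standardized variant, alpha = k·r̄ / (1 + (k−1)·r̄), where r̄ is the average inter-item correlation. The sketch below uses that variant as an illustration; the paper cites [15] for its exact formulation, which may differ.

```python
import numpy as np

def cronbach_alpha_std(items):
    """Standardized Cronbach's alpha from the average inter-item
    correlation r_bar: alpha = k * r_bar / (1 + (k - 1) * r_bar).
    `items` is an (n_respondents x k_items) array of responses."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    corr = np.corrcoef(x, rowvar=False)  # k x k correlation matrix
    # average of the off-diagonal correlations
    r_bar = (corr.sum() - k) / (k * (k - 1))
    return k * r_bar / (1.0 + (k - 1) * r_bar)
```

Perfectly consistent items (pairwise correlation 1) give alpha = 1; values above roughly 0.7, like the 0.83 and 0.95 reported here, indicate acceptable internal consistency.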
Also, in the case of the UK, the official values of the ONS are within the confidence interval of our estimates of the number of cases, new cases in the last 7 days, and cases in the last 24 hours. Cronbach's alpha coefficient is 0.83 for Australia and 0.95 for the UK, which tells us that the reliability of the estimates is very good. The results of the estimates and Cronbach's alpha coefficient allow concluding that we can use the indirect survey approach to make estimates when official data is not available or reliable, and to use them considering a prudential bias when assessing them.

Table 2 shows the estimated results for China for all the questions of the survey.

1 https://ourworldindata.org/, downloaded on July 24th, 2023. Observe that these values have changed from those downloaded in February 2023 [11].
2 https://www.ons.gov.uk/, downloaded on February 3rd, 2023.

Table 1: COVID-19 metrics in % (and 95% CI) obtained from indirect survey data and official reports for Australia and the UK. (1) People aged 12 years and over that have received at least one/two/three doses on Aug 31, 2022. (2) England data only, 5 weeks.

| Metric | Australia: Indirect Survey | Australia: OWID | UK: Indirect Survey | UK: OWID | UK: ONS |
|---|---|---|---|---|---|
| Cases (last month) | 12.43 (10.26 - 14.60) | 1.731 | 8.67 (7.39 - 9.96) | 0.298 | 9.663 |
| Vaccination rate | 76.50 (73.70 - 79.29) | 84.95 | 78.86 (77.00 - 80.72) | 79.71 | 93.6/88.2/70.2 (1) |
| Mortality (last month) | 0.34 (0.00 - 0.72) | 0.005 | 0.43 (0.13 - 0.73) | 0.006 | 0.005 (2) |
| Hospitalizations (last month) | 1.02 (0.36 - 1.68) | 0.112 | 0.81 (0.40 - 1.22) | 0.133 | 0.044 (2) |
| Cases (24 hours) | 2.03 (1.10 - 2.96) | 0.118 | 1.30 (0.78 - 1.82) | 0.037 | 1.458 |
| New cases (7 days) | 2.71 (1.64 - 3.78) | 0.118 | 1.30 (0.78 - 1.82) | 0.023 | 1.116 |
| Cronbach's alpha | 0.83 | | 0.95 | | |

Table 2: COVID-19 incidence metrics in % obtained from indirect survey data for China.

| | Samples | Cases (last month) | Vaccination rate | Mortality (last month) | Hosp (last month) | Cases (24 hours) | Cases (7 days) |
|---|---|---|---|---|---|---|---|
| China | 469 | 78.57 (77.62-79.54) | 91.03 (90.36-91.70) | 1.19 (0.94-1.45) | 9.30 (8.61-9.97) | 2.87 (2.48-3.26) | 9.52 (8.83-10.21) |
| Jiangsu (province) | 48 | 75.56 (72.42-78.69) | 87.92 (85.54-90.30) | 1.67 (0.73-2.60) | 7.64 (5.70-9.58) | 2.64 (1.47-3.81) | 9.44 (7.31-11.58) |
| Guangdong (province) | 45 | 80.00 (76.98-83.02) | 86.07 (83.46-88.69) | 0.59 (0.01-1.17) | 5.33 (3.64-7.03) | 3.26 (1.92-4.60) | 6.96 (5.04-8.88) |
| Shandong (province) | 27 | 74.81 (70.59-79.04) | 95.80 (93.85-97.76) | 1.48 (0.30-2.66) | 8.40 (5.69-11.10) | 2.22 (0.79-3.66) | 6.67 (4.24-9.10) |
| Shanghai (city) | 9 | 68.89 (61.08-76.70) | 88.15 (82.70-93.60) | 2.22 (0.00-4.71) | 5.93 (1.94-9.91) | 0.74 (0.00-2.19) | 5.19 (1.44-8.93) |
| Guangzhou (city) | 11 | 81.82 (75.93-87.70) | 86.67 (81.48-91.85) | 1.82 (0.00-3.86) | 9.70 (5.18-14.21) | 4.85 (1.57-8.13) | 7.27 (3.31-11.24) |
| Chengdu (city) | 8 | 89.17 (83.61-94.73) | 88.33 (82.59-94.08) | 0.83 (0.00-2.46) | 8.33 (3.39-13.28) | 0.83 (0.79-2.45) | 8.33 (3.39-13.28) |
| Beijing (city) | 8 | 74.17 (66.33-82.00) | 91.67 (86.72-96.61) | 0.83 (0.00-2.45) | 13.33 (7.25-19.42) | 5.00 (1.10-8.90) | 11.67 (5.92-17.41) |
While 1,000 indirect survey responses were collected, the filters specified in Section 2.2 reduced the sample size drastically to 469. Comparing our results with the OWID data for China, the vaccination rate is 91.9% while we estimate 91.03% (90.36%-91.70%), which is almost a perfect match. The number of deaths reported by OWID is 0.005% while we estimate 1.19% (0.94%-1.45%), a much higher value. However, OWID warns that "the number of confirmed deaths may not accurately represent the true number of deaths". Therefore, our estimate could serve as a first approximation (that may be biased). Our estimate of the number of cases in the last month is 78.57% (77.62%-79.54%), very far from the 6.182% reported by OWID (which warns that "the number of confirmed cases is lower than the true number of infections"). Note that some areas of China may have a high incidence, as noted in the report published at [16]: "nearly 90% of Henan's population had been infected by 6 January".

We compute estimates for the provinces and cities with the largest number of samples (see Table 2). The rates of vaccination and of cases in the last month are similar in all of them, and similar to the values for China overall. The Guangdong province shows the lowest estimates of hospitalizations and deaths, while it has large case estimates among provinces. Among cities, Beijing shows low estimates of monthly cases, but large rates of recent cases and hospitalizations. Unfortunately, the sample size for cities is very small. Finally, we would like to point out that, in general, the sample is relatively small compared to the size of the country. Additionally, as can be seen in Table 3 in the Supplementary Information, the sample is biased by age and education level. These biases are reduced with the use of indirect questions, but more studies are still needed.

4 CONCLUSIONS AND FUTURE WORK

This work aims to estimate a snapshot of COVID-19 incidence, hospitalizations, and mortality from indirect surveys in China in January 2023.
To estimate these epidemic indicators, we used a modified version of the NSUM technique that fixes the number of people in the contact network. In addition, a data pre-processing stage is included to extract a reliable set of survey samples. In future work, we are interested in analyzing multiple data preprocessing techniques to minimize the number of discarded samples and maximize the knowledge extracted from indirect surveys. Additional results and a more extended discussion can be found in the full version of the article [11].

5 RESEARCH ETHICS APPROVAL

To carry out this study, a request was previously submitted to the ethics committee of IMDEA Networks Institute, which approved it in the last quarter of 2022. Basically, the ethics committee approved that the study could be carried out while keeping the anonymity of the respondents. On the other hand, the platform used for the collection of survey information guarantees that the participants (who belong to that platform) give their consent to participate.

6 CONFLICT OF INTEREST DISCLOSURES

None reported.

7 FUNDING/SUPPORT

This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by the Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.

8 DATA SHARING STATEMENT

The data collected in the indirect surveys is publicly available at https://github.com/GCGImdea/coronasurveys/tree/master/papers/2023-COVID-19-China-January.

9 ACKNOWLEDGMENT

We want to thank Lin Wang for his help with the Chinese version of the survey.
Q-Kz-5gCR64
Paper shows the efficacy of indirect surveys in the estimation of epidemic indicators in places where official figures are unreliable.
4: Good paper, accept
This paper tackles the problem of estimating snapshot Covid-19 incidence rates in locations where the official figures are believed to be unreliable. The authors utilize an indirect survey method to collect data from respondents, which has the benefits of preserving their privacy and mitigating bias due to age or education level. They modify the Network Scale Up Method by fixing the number of close contacts in their survey. They validate their approach by estimating for the UK and Australia using the English version of the indirect survey and present results from China. I think this is a well-written paper describing the methods, data collection strategy, and prior related work in adequate detail. By comparing their estimates for the UK and Australia with the official figures, they show the validity of their estimates in China, where the official figures might conceal the true rates of hospitalizations and mortality. The results are very interesting as they show general agreement with the official vaccination rates while showing wide disparity in the estimates for deaths and cases. The data pre-processing steps weed out inconsistent and/or outlier responses. This whittles down the sample size from 1000 to 469, which affects the ability to reliably estimate for cities, especially considering the population size. I was wondering if there was a way to preserve some of the inconsistent responses by making expert adjustments, and how that would affect the results? Lastly, they compute the Cronbach's Alpha coefficient on the responses of the indirect surveys for the UK and Australia, which suggests that the indirect survey method is reliable. I believe the methods in this paper are well thought-out and the results are worth a close look. I await the outcome of their future work.
3: The reviewer is fairly confident that the evaluation is correct
rMSlLb33Gb
KDD.org/2023/Workshop/epiDAMIK
2023
A Snapshot of COVID-19 Incidence, Hospitalizations, and Mortality from Indirect Survey Data in China in January 2023 (Extended Abstract)
["Juan Marcos Ramirez", "Sergio Diaz-Aranda", "Jose Aguilar", "Oluwasegun Ojo", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts to survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a value very close to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained by the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates, in relation to the consistency of the answers, is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys to achieve a wide population coverage, preserving the privacy of the participants.
["COVID-19", "incidence estimation", "indirect surveys", "NSUM"]
ABSTRACT

The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts to survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a value very close to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained by the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates, in relation to the consistency of the answers, is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys to achieve a wide population coverage, preserving the privacy of the participants.

KEYWORDS

COVID-19, incidence estimation, indirect surveys, NSUM

1 INTRODUCTION

To effectively manage public health resources, monitoring infectious diseases such as COVID-19 requires knowledge of various epidemic indicators, such as the number of cases, deaths, and hospitalizations, among others. Most of these indicators have been collected through the use of methods that require the presence of a substantial portion of the target population, such as antigen test screenings or hospital records. In order to overcome these disadvantages, several methods have used direct surveys to estimate indicators [1, 2]. Unfortunately, direct surveys depend on the participation of a large number of people to obtain reliable estimates, usually collect sensitive personal data (which may deter respondents due to privacy concerns), and require careful data manipulation.

An alternative to these surveys is using indirect surveys, which ask participants about the people in their contact network, rather than themselves. From the responses provided by indirect surveys, the estimates of different variables can be derived using the Network Scale-up Method (NSUM) [3, 4]. As a result of this approach, 1) a larger sub-population may be reached, 2) data collection costs may be reduced, 3) a computationally efficient method can be used to obtain estimates, and 4) participants will be assured of high levels of privacy. Indirect surveys have already been implemented for estimating indicators during the COVID-19 pandemic [5, 6].

In this work, we use indirect online surveys to capture a snapshot of cases, mortality, vaccination, and hospitalizations due to COVID-19 in China for the period of January 18-26, 2023.
To this end, a modified version of the NSUM approach that fixes the number of people in the contact network is used to estimate different epidemic indicators. In essence, this modified version extracts knowledge about epidemic indicators without resorting to the additional control questions that are usually needed to estimate the reach (the number of people in the contact network). In addition, a data pre-processing stage is included, comprising a set of consistency filters and a nonlinear outlier detection stage, to improve the reliability of the collected data. We validate our approach using data from Australia and the United Kingdom (UK) collected on January 19, 2023. These metrics are compared with respect to the official values reported by Our World in Data (OWID) and the Office for National Statistics (ONS) from the UK. In addition, we use Cronbach's alpha index [7], which is a reliability value to measure the internal consistency of the questionnaire generated by indirect surveys.

2 METHODS

2.1 Sampling Participants

We conducted online indirect surveys using the PollFish platform. Specifically, we conducted an online survey in China between January 18-26, 2023. This online survey collected information about various COVID-19 indicators (vaccination, deaths, and number of cases in the last month, the last 7 days, and the past 24 hours) among the 15 closest contacts of 1,000 participants (see the Supplementary Information section for the English version of the survey questions). Notice that the selected number of closest contacts to respondents (15) is considered the size of the good-friends support group according to Dunbar's theory [8]. This number provides a trade-off between the size of the subpopulation we aim to cover (reach) and the minimization of undesired effects due to respondents, such as transmission and recall errors [4].
Additionally, for validation, we conducted online surveys in Australia (100 responses) and the UK (200 responses) on January 19, 2023. Table 3 in the Supplementary Information shows the characteristics of the survey respondents (the platform provides information on gender, age group, education, and ethnicity). The respondents of each survey are also stratified by region. For instance, Fig. 1 in the Supplementary Information shows a map of China where the intensity corresponds to the number of questionnaires completed in each province.

2.2 Data Analysis

In order to obtain a reliable dataset, we performed two preprocessing subphases: (1) an inconsistency filter, and (2) a univariate outlier detection.

(1) The inconsistency filter removes participants with inconsistent responses: fewer infected contacts than fatalities, fewer infected contacts than hospitalized, fewer infected contacts in the last month than in the last 7 days, and fewer infected contacts in the last month than in the last 24 hours.

(2) Since the collected variables exhibit extremely skewed distributions, the robust outlier detection method reported in [9] is applied. Based on the variable data, this method first estimates the quartiles Q1 and Q3, as well as the interquartile range (IQR). Then, the whiskers Qα and Qβ are set. Finally, this method preserves the samples in the interval limited by

[Q1 − 1.5·e^(a·MC)·IQR; Q3 + 1.5·e^(b·MC)·IQR]   (1)

where MC is the medcouple statistic, which estimates the degree of skewness of the data. Samples outside the interval are marked as outliers and, consequently, are removed. In addition, to estimate the parameters a and b, we consider the system [9]

log((2/3)·(Q1 − Qα)/IQR) ≈ a·MC
log((2/3)·(Qβ − Q3)/IQR) ≈ b·MC   (2)

where Qα and Qβ are the α-th and β-th quantiles of the distribution, with α = 0.15 and β = 0.85.

We consider the NSUM approach to estimate the rates of the different COVID-19 indicators.
In particular, NSUM is a statistical framework for estimating hidden populations from indirect surveys. There are three main NSUM approaches: frequentist models that estimate subpopulation rates, Bayesian models that include priors, and network models that estimate population properties [4]. To estimate cumulative incidences, hospitalization rates, and mortality rates, we modify an NSUM method belonging to the category of frequentist models based on maximum likelihood estimation (MLE). In this regard, let c_i be the number of contacts of the i-th respondent that have a particular characteristic, e.g., persons who have been hospitalized. Further, consider r_i the number of close contacts of the i-th respondent (which in this study is fixed at r_i = 15, as shown in the questions in the Supplementary Information). The requirement of close contacts is introduced to minimize the effect of the visibility bias [10] with respect to the classical method [3]. Hence, we estimate the aggregated rate, p, as Σ_i c_i / Σ_i r_i = Σ_i c_i / (15n), with n the number of responses (samples). The estimator's standard error is √(p(1−p)/(15n)), assuming that the c_i are independent binomial random variables with 15 trials and success probability p.

We evaluated the validity of our approach by comparing the difference between the official values reported on the Our World in Data (OWID)^1 platform and the values estimated by our approach for Australia and the United Kingdom (see Table 1). In both countries, official data were extracted between December 20, 2022, and January 19, 2023. In order to determine the number of hospitalized persons given the hospital occupancy, the length of a hospital stay is fixed at 4 days [12, 13].

Additionally, for the UK, we use the data provided by the Office for National Statistics (ONS)^2. In particular, for the number of cases we use the daily estimates of the infected population obtained by the Coronavirus (COVID-19) Infection Survey of the ONS.
For the 7-day and last-month estimates, in order not to count the same cases multiple times, the sum of the daily percentages is divided by 10 days, an estimated average duration of infection with Omicron [14]. Hospitalizations are the sum of the weekly admission rates with COVID-19 in England from Dec 19, 2022, to Jan 22, 2023 (5 weeks). Mortality is the rate of registered deaths involving COVID-19 in England from Dec 17, 2022, to Jan 20, 2023.

Finally, we use Cronbach's alpha coefficient to measure the reliability of the results obtained from the indirect surveys. Specifically, it quantifies the reliability of a value of an unobservable variable constructed from the observed variables. The closer this coefficient is to its maximum value of 1, the greater the reliability of the measure; in general, values greater than 0.7 are considered sufficient to guarantee reliability. In this work, we compute Cronbach's alpha coefficient based on correlations [15].

3 RESULTS

Table 1 displays the estimates and the 95% confidence intervals for the surveys conducted in the UK and Australia. In addition, it shows the statistics provided by official reports. The confidence interval is computed as p ± 1.96·√(p(1−p)/(15n)). As can be observed, the vaccination estimates are very close to the official values: they are estimated as 76.50% (73.70% - 79.29%) and 78.86% (95% confidence interval: 77.00% - 80.72%) in Australia and the UK, respectively, while the official (OWID) values are 84.95% and 79.71%. In the case of mortality and hospitalizations in the last month, the official values are within the confidence interval of our estimates in the case of Australia. Specifically, the mortality rate is 0.34% (0.00% - 0.72%) while the official value is 0.005%, and the hospitalization rate is 1.02% (0.36% - 1.68%) while the official value is 0.112%.
Also, in the case of the UK, the official values of the ONS are within the confidence interval of our estimates of the number of cases, new cases in the last 7 days, and cases in the last 24 hours. Cronbach's alpha coefficient is 0.83 for Australia and 0.95 for the UK, which tells us that the reliability of the estimates is very good. The results of the estimates and Cronbach's alpha coefficient allow concluding that we can use the indirect survey approach to make estimates when official data is not available or reliable, and to use them considering a prudential bias when assessing them.

Table 2 shows the estimated results for China for all the questions of the survey.

1 https://ourworldindata.org/, downloaded on July 24th, 2023. Observe that these values have changed from those downloaded in February 2023 [11].
2 https://www.ons.gov.uk/, downloaded on February 3rd, 2023.

Table 1: COVID-19 metrics in % (and 95% CI) obtained from indirect survey data and official reports for Australia and the UK. (1) People aged 12 years and over that have received at least one/two/three doses on Aug 31, 2022. (2) England data only, 5 weeks.

| Metric | Australia: Indirect Survey | Australia: OWID | UK: Indirect Survey | UK: OWID | UK: ONS |
|---|---|---|---|---|---|
| Cases (last month) | 12.43 (10.26 - 14.60) | 1.731 | 8.67 (7.39 - 9.96) | 0.298 | 9.663 |
| Vaccination rate | 76.50 (73.70 - 79.29) | 84.95 | 78.86 (77.00 - 80.72) | 79.71 | 93.6/88.2/70.2 (1) |
| Mortality (last month) | 0.34 (0.00 - 0.72) | 0.005 | 0.43 (0.13 - 0.73) | 0.006 | 0.005 (2) |
| Hospitalizations (last month) | 1.02 (0.36 - 1.68) | 0.112 | 0.81 (0.40 - 1.22) | 0.133 | 0.044 (2) |
| Cases (24 hours) | 2.03 (1.10 - 2.96) | 0.118 | 1.30 (0.78 - 1.82) | 0.037 | 1.458 |
| New cases (7 days) | 2.71 (1.64 - 3.78) | 0.118 | 1.30 (0.78 - 1.82) | 0.023 | 1.116 |
| Cronbach's alpha | 0.83 | | 0.95 | | |

Table 2: COVID-19 incidence metrics in % obtained from indirect survey data for China.

| | Samples | Cases (last month) | Vaccination rate | Mortality (last month) | Hosp (last month) | Cases (24 hours) | Cases (7 days) |
|---|---|---|---|---|---|---|---|
| China | 469 | 78.57 (77.62-79.54) | 91.03 (90.36-91.70) | 1.19 (0.94-1.45) | 9.30 (8.61-9.97) | 2.87 (2.48-3.26) | 9.52 (8.83-10.21) |
| Jiangsu (province) | 48 | 75.56 (72.42-78.69) | 87.92 (85.54-90.30) | 1.67 (0.73-2.60) | 7.64 (5.70-9.58) | 2.64 (1.47-3.81) | 9.44 (7.31-11.58) |
| Guangdong (province) | 45 | 80.00 (76.98-83.02) | 86.07 (83.46-88.69) | 0.59 (0.01-1.17) | 5.33 (3.64-7.03) | 3.26 (1.92-4.60) | 6.96 (5.04-8.88) |
| Shandong (province) | 27 | 74.81 (70.59-79.04) | 95.80 (93.85-97.76) | 1.48 (0.30-2.66) | 8.40 (5.69-11.10) | 2.22 (0.79-3.66) | 6.67 (4.24-9.10) |
| Shanghai (city) | 9 | 68.89 (61.08-76.70) | 88.15 (82.70-93.60) | 2.22 (0.00-4.71) | 5.93 (1.94-9.91) | 0.74 (0.00-2.19) | 5.19 (1.44-8.93) |
| Guangzhou (city) | 11 | 81.82 (75.93-87.70) | 86.67 (81.48-91.85) | 1.82 (0.00-3.86) | 9.70 (5.18-14.21) | 4.85 (1.57-8.13) | 7.27 (3.31-11.24) |
| Chengdu (city) | 8 | 89.17 (83.61-94.73) | 88.33 (82.59-94.08) | 0.83 (0.00-2.46) | 8.33 (3.39-13.28) | 0.83 (0.79-2.45) | 8.33 (3.39-13.28) |
| Beijing (city) | 8 | 74.17 (66.33-82.00) | 91.67 (86.72-96.61) | 0.83 (0.00-2.45) | 13.33 (7.25-19.42) | 5.00 (1.10-8.90) | 11.67 (5.92-17.41) |
While 1,000 indirect survey responses were collected, the filters specified in Section 2.2 reduced the sample size drastically to 469. Comparing our results with the OWID data for China, the vaccination rate is 91.9% while we estimate 91.03% (90.36%-91.70%), which is almost a perfect match. The number of deaths reported by OWID is 0.005% while we estimate 1.19% (0.94%-1.45%), a much higher value. However, OWID warns that "the number of confirmed deaths may not accurately represent the true number of deaths". Therefore, our estimate could serve as a first approximation (that may be biased). Our estimate of the number of cases in the last month is 78.57% (77.62%-79.54%), very far from the 6.182% reported by OWID (which warns that "the number of confirmed cases is lower than the true number of infections"). Note that some areas of China may have a high incidence, as noted in the report published at [16]: "nearly 90% of Henan's population had been infected by 6 January".

We compute estimates for the provinces and cities with the largest number of samples (see Table 2). The rates of vaccination and of cases in the last month are similar in all of them, and similar to the values for China overall. The Guangdong province shows the lowest estimates of hospitalizations and deaths, while it has large case estimates among provinces. Among cities, Beijing shows low estimates of monthly cases, but large rates of recent cases and hospitalizations. Unfortunately, the sample size for cities is very small. Finally, we would like to point out that, in general, the sample is relatively small compared to the size of the country. Additionally, as can be seen in Table 3 in the Supplementary Information, the sample is biased by age and education level. These biases are reduced with the use of indirect questions, but more studies are still needed.

4 CONCLUSIONS AND FUTURE WORK

This work aims to estimate a snapshot of COVID-19 incidence, hospitalizations, and mortality from indirect surveys in China in January 2023.
To estimate these epidemic indicators, we used a modified version of the NSUM technique that fixes the number of people in the contact network. In addition, a data pre-processing stage is included to extract a reliable set of survey samples. In future work, we are interested in analyzing multiple data preprocessing techniques to minimize the number of discarded samples and maximize the knowledge extracted from indirect surveys. Additional results and a more extended discussion can be found in the full version of the article [11].

5 RESEARCH ETHICS APPROVAL

To carry out this study, a request was previously submitted to the ethics committee of IMDEA Networks Institute, which approved it in the last quarter of 2022. Basically, the ethics committee approved that the study could be carried out while keeping the anonymity of the respondents. On the other hand, the platform used for the collection of survey information guarantees that the participants (who belong to that platform) give their consent to participate.

6 CONFLICT OF INTEREST DISCLOSURES

None reported.

7 FUNDING/SUPPORT

This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by the Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.

8 DATA SHARING STATEMENT

The data collected in the indirect surveys is publicly available at https://github.com/GCGImdea/coronasurveys/tree/master/papers/2023-COVID-19-China-January.

9 ACKNOWLEDGMENT

We want to thank Lin Wang for his help with the Chinese version of the survey.
i7lQlnCLyYO
Paper provides a succinct and accessible method to estimate disease incidence
4: Good paper, accept
### Summary This paper seeks to improve disease incidence estimation methods using information from surveys about contacts, rather than the respondents' direct experience. They can obtain much more information by asking about multiple individuals the respondent knows rather than gathering information about only one individual per survey. From this information, they use a modified network scale-up method to determine estimated incidence for Australia, the UK, and China, and use Cronbach's alpha to verify the reliability of their data. In addition, they thoroughly clean the data they obtain in order to get a better estimate. ### Strengths - They compare with a range of locations for validation rather than relying on only one. - Their data-preprocessing and estimation methods are clear and well-explained. ### Weaknesses - The authors do not discuss how the differences in region of respondents affect the estimates in other regions. For example, how do estimates based on the regions with many respondents perform for regions with a much lower response rate? - They do not discuss the impact of sample size. Can a study be performed where the sample size is discussed in the context of confidence and estimate performance? It may not be viable to study, but are there hypotheses on when the sample size is too large (i.e., the sets of 15 contacts begin to overlap, resulting in over-counting)? ### Suggestions - The sentence "…and hospitalizations among 15 of the closest contacts" could use a bit more elaboration, such as "…closest contacts to survey respondents". - What are the results if the data was not pre-processed? ### Minor - What is n on line 177? - The writing is imprecise in some places, such as lines 206 and 224
4: The reviewer is confident but not absolutely certain that the evaluation is correct
rMSlLb33Gb
KDD.org/2023/Workshop/epiDAMIK
2023
A Snapshot of COVID-19 Incidence, Hospitalizations, and Mortality from Indirect Survey Data in China in January 2023 (Extended Abstract)
["Juan Marcos Ramirez", "Sergio Diaz-Aranda", "Jose Aguilar", "Oluwasegun Ojo", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts of survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a value very close to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained from the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates in relation to the consistency of the answers is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys to achieve a wide population coverage, preserving the privacy of the participants.
["COVID-19", "incidence estimation", "indirect surveys", "NSUM"]
ABSTRACT
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts of survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a value very close to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained from the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates, in relation to the consistency of the answers, is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys while achieving wide population coverage and preserving the privacy of the participants.

KEYWORDS
COVID-19, incidence estimation, indirect surveys, NSUM

1 INTRODUCTION
To effectively manage public health resources, monitoring infectious diseases such as COVID-19 requires knowledge of various epidemic indicators, such as the number of cases, deaths, and hospitalizations, among others. Most of these indicators have been collected through the use of methods that require the presence of a substantial portion of the target population, such as antigen test screenings or hospital records. In order to overcome these disadvantages, several methods have used direct surveys to estimate indicators [1, 2]. Unfortunately, direct surveys depend on the participation of a large number of people to obtain reliable estimates, usually collect sensitive personal data (which may deter respondents due to privacy concerns), and require careful data manipulation.

An alternative to these surveys is using indirect surveys, which ask participants about the people in their contact network, rather than about themselves. From the responses provided by indirect surveys, estimates of different variables can be derived using the Network Scale-up Method (NSUM) [3, 4]. As a result of this approach, 1) a larger sub-population may be reached, 2) data collection costs may be reduced, 3) a computationally efficient method can be used to obtain estimates, and 4) participants are assured high levels of privacy. Indirect surveys have already been implemented for estimating indicators during the COVID-19 pandemic [5, 6].

In this work, we use indirect online surveys to capture a snapshot of cases, mortality, vaccination, and hospitalizations due to COVID-19 in China for the period of January 18-26, 2023.
To this end, a modified version of the NSUM approach that fixes the number of people in the contact network is used to estimate different epidemic indicators. In essence, this modified version extracts knowledge about epidemic indicators without resorting to the additional control questions that are usually considered to estimate the reach (the number of people in the contact network). In addition, a data pre-processing stage is included, which comprises a set of consistency filters and a nonlinear outlier detection stage, to improve the reliability of the collected data. We validate our approach using data from Australia and the United Kingdom (UK) collected on January 19, 2023. These metrics are compared with the official values reported by Our World in Data (OWID) and the Office for National Statistics (ONS) of the UK. In addition, we use Cronbach's alpha index [7], which is a reliability value that measures the internal consistency of the questionnaire generated by indirect surveys.

2 METHODS
2.1 Sampling Participants
We conducted online indirect surveys using the PollFish platform. Specifically, we conducted an online survey in China between January 18-26, 2023. This online survey collected information about various COVID-19 indicators (vaccination, deaths, and number of cases in the last month, the last 7 days, and the past 24 hours) among the 15 closest contacts of 1,000 participants (see the Supplementary Information section for the English version of the survey questions). Notice that the selected number of closest contacts of respondents (15) is considered the size of the good-friends support group according to Dunbar's theory [8]. This number provides us a trade-off between the size of the subpopulation we aim to cover (reach) and the minimization of undesired effects due to respondents, such as transmission and recall errors [4].
Additionally, for validation, we conducted online surveys in Australia (100 responses) and the UK (200 responses) on January 19, 2023. Table 3 in Supplementary Information shows the characteristics of the survey respondents (the platform provides information on gender, age group, education, and ethnicity). The respondents of each survey are also stratified by region. For instance, Fig. 1 in Supplementary Information shows a map of China where the intensity corresponds to the number of questionnaires completed in each province.

2.2 Data Analysis
In order to obtain a reliable dataset, we performed two subphases of preprocessing: (1) an inconsistency filter, and (2) a univariate outlier detection.

(1) The inconsistency filter removes participants with inconsistent responses: fewer infected contacts than fatalities, fewer infected contacts than hospitalized, fewer infected contacts in the last month than in the last 7 days, and fewer infected contacts in the last month than in the last 24 hours.

(2) Since the collected variables exhibit extremely skewed distributions, the robust outlier detection method reported in [9] is applied. Based on the variable data, this method first estimates the quartiles Q1 and Q3, as well as the interquartile range (IQR). Then, the whiskers Qα and Qβ are set. Finally, this method preserves the samples in the interval limited by

    [Q1 - 1.5 e^(a·MC) IQR ; Q3 + 1.5 e^(b·MC) IQR]    (1)

where MC is the medcouple statistic, which estimates the degree of skewness of the data. Samples outside the interval are marked as outliers and, consequently, are removed. In addition, to estimate the parameters a and b, we consider the system [9]

    log((2/3)(Q1 - Qα)/IQR) ≈ a·MC
    log((2/3)(Qβ - Q3)/IQR) ≈ b·MC    (2)

where Qα and Qβ are the α-th and β-th quantiles of the distribution, with α = 0.15 and β = 0.85.

We consider the NSUM approach to estimate the rates of the different COVID-19 indicators.
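The two preprocessing subphases can be sketched as below. This is a minimal illustration, not the authors' code: the field names in `inconsistency_filter` are hypothetical, the medcouple is a naive O(n^2) implementation, and the fence exponents default to a = -4, b = 3 (the values commonly proposed for the adjusted boxplot) instead of being calibrated via equation (2).

```python
import numpy as np

def inconsistency_filter(rows):
    """Drop responses whose subcounts exceed the infected-contacts count.

    Each row is a dict with contacts infected in the last month (`cases`)
    and, among them, `deaths`, `hosp`, `cases_7d`, and `cases_24h`
    (hypothetical field names).
    """
    return [r for r in rows
            if r["cases"] >= max(r["deaths"], r["hosp"],
                                 r["cases_7d"], r["cases_24h"])]

def medcouple(x):
    """Naive O(n^2) medcouple: a robust skewness statistic in [-1, 1]."""
    x = np.sort(np.asarray(x, dtype=float))
    m = np.median(x)
    # Kernel h(xi, xj) over pairs with xi <= median <= xj, xi != xj.
    h = [((xj - m) - (m - xi)) / (xj - xi)
         for xi in x[x <= m] for xj in x[x >= m] if xj > xi]
    return float(np.median(h)) if h else 0.0

def adjusted_boxplot_keep(x, a=-4.0, b=3.0):
    """Keep samples inside [Q1 - 1.5 e^(a*MC) IQR, Q3 + 1.5 e^(b*MC) IQR]."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mc = medcouple(x)
    lo = q1 - 1.5 * np.exp(a * mc) * iqr
    hi = q3 + 1.5 * np.exp(b * mc) * iqr
    return x[(x >= lo) & (x <= hi)]
```

For symmetric data (MC = 0) the fences reduce to the classical Tukey rule Q1 - 1.5 IQR and Q3 + 1.5 IQR; for right-skewed data the upper fence widens and the lower fence tightens.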
In particular, NSUM is a statistical framework for estimating hidden populations from indirect surveys. There are three main NSUM approaches: frequentist models that estimate subpopulation rates, Bayesian models that include priors, and network models that estimate population properties [4]. To estimate cumulative incidences, hospitalization rates, and mortality rates, we modify an NSUM method belonging to the category of frequentist models based on maximum likelihood estimation (MLE). In this regard, let ci be the number of contacts of the i-th respondent that have a particular characteristic, e.g., persons who have been hospitalized. Further, let ri be the number of close contacts of the i-th respondent (which in this study is fixed at ri = 15, as shown in the questions in the Supplementary Information). The requirement of close contacts is introduced to minimize the effect of the visibility bias [10] with respect to the classical method [3]. Hence, we estimate the aggregated rate as

    p = (Σi ci) / (Σi ri) = (Σi ci) / (15n),

with n the number of responses (samples). The estimator's standard error is √(p(1-p)/(15n)), assuming that the ci are independent binomial random variables with 15 trials and success probability p.

We evaluated the validity of our approach by comparing the difference between the official values reported on the Our World in Data (OWID) platform and the values estimated by our approach for Australia and the United Kingdom (see Table 1). In both countries, official data were extracted between December 20, 2022, and January 19, 2023. In order to determine the number of hospitalized persons given the hospital occupancy, the length of a hospital stay is fixed at 4 days [12, 13].

Additionally, for the UK, we use the data provided by the Office for National Statistics (ONS). In particular, for the number of cases we use the daily estimates of the infected population obtained by the Coronavirus (COVID-19) Infection Survey of the ONS.
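The fixed-reach estimator described above is simple enough to sketch directly; the snippet below is an illustrative toy, not the authors' code, computing p = Σi ci / (15n) together with the 95% normal-approximation confidence interval used in the Results section.

```python
import math

def nsum_rate(contact_counts, reach=15):
    """Fixed-reach NSUM rate estimate with a 95% normal-approximation CI.

    contact_counts[i] is the number of the i-th respondent's `reach`
    closest contacts having the attribute (e.g. hospitalized last month).
    """
    n = len(contact_counts)
    p = sum(contact_counts) / (reach * n)
    se = math.sqrt(p * (1 - p) / (reach * n))  # binomial standard error
    return p, (p - 1.96 * se, p + 1.96 * se)
```

For example, four respondents reporting 3, 0, 1, and 2 affected contacts among their 15 closest contacts give p = 6/60 = 10%, with the interval shrinking as the number of responses n grows.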
For the 7-day and last-month estimates, in order not to count the same cases multiple times, the sum of the daily percentages is divided by 10 days, an estimated average duration of the infection with Omicron [14]. Hospitalizations are the sum of the weekly admission rates with COVID-19 in England from Dec 19, 2022, to Jan 22, 2023 (5 weeks). Mortality is the rate of registered deaths involving COVID-19 in England from Dec 17, 2022, to Jan 20, 2023.

Finally, we use Cronbach's alpha coefficient to measure the reliability of the results obtained from the indirect surveys. Specifically, it quantifies the reliability of a value of an unobservable variable constructed from the observed variables. The closer this coefficient is to its maximum value of 1, the greater the reliability of the measure; in general, values greater than 0.7 are considered sufficient to guarantee reliability. In this work, we compute Cronbach's alpha coefficient based on correlations [15].

3 RESULTS
Table 1 displays the estimates and the 95% confidence intervals for the surveys conducted in the UK and Australia. In addition, it shows the statistics provided by official reports. The confidence interval is computed as p ± 1.96·√(p(1-p)/(15n)). As can be observed, the vaccination estimates are very close to the official values: they are estimated as 76.50% (73.70% - 79.29%) and 78.86% (95% confidence interval: 77.00% - 80.72%) in Australia and the UK, respectively, while the official (OWID) values are 84.95% and 79.71%. In the case of mortality and hospitalizations in the last month, the official values are within the confidence interval of our estimates in the case of Australia. Specifically, the mortality rate is 0.34% (0.00% - 0.72%) while the official value is 0.005%, and the hospitalization rate is 1.02% (0.36% - 1.68%) while the official value is 0.112%.
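A common correlation-based form of Cronbach's alpha is the standardized alpha, computed from the mean inter-item correlation. The sketch below assumes this variant; the paper only cites [15] for its computation, so the exact formula used there may differ.

```python
import numpy as np

def cronbach_alpha_std(items):
    """Standardized Cronbach's alpha from the mean inter-item correlation.

    items: (n_respondents, k_items) matrix of survey responses,
    one column per questionnaire item.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    corr = np.corrcoef(items, rowvar=False)
    # Mean of the off-diagonal entries of the k x k correlation matrix.
    r_bar = (corr.sum() - k) / (k * (k - 1))
    return k * r_bar / (1 + (k - 1) * r_bar)
```

Perfectly correlated items yield alpha = 1, and higher mean inter-item correlation (more internally consistent answers) pushes alpha toward 1, matching the 0.7 rule of thumb quoted in the text.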
Also, in the case of the UK, the official values of the ONS are within the confidence intervals of our estimates of the number of cases, new cases in the last 7 days, and cases in the last 24 hours. Cronbach's alpha coefficient is 0.83 for Australia and 0.95 for the UK, which tells us that the reliability of the estimates is very good. The results of the estimates and Cronbach's alpha coefficient allow us to conclude that we can use the indirect survey approach to make estimates when official data is not available or reliable, and to use them considering a prudential bias when assessing them.

[OWID data from https://ourworldindata.org/, downloaded on July 24th, 2023; observe that these values have changed from those downloaded in February 2023 [11]. ONS data from https://www.ons.gov.uk/, downloaded on February 3rd, 2023.]

Table 1: COVID-19 metrics in % (and 95% CI) obtained from indirect survey data and official reports for Australia and the UK. (1) People aged 12 years and over that have received at least one/two/three doses on Aug 31, 2022. (2) England data only, 5 weeks.

| Metric | Australia: Indirect Survey | Australia: OWID | UK: Indirect Survey | UK: OWID | UK: ONS |
| Cases (last month) | 12.43 (10.26 - 14.60) | 1.731 | 8.67 (7.39 - 9.96) | 0.298 | 9.663 |
| Vaccination rate | 76.50 (73.70 - 79.29) | 84.95 | 78.86 (77.00 - 80.72) | 79.71 | 93.6/88.2/70.2 (1) |
| Mortality (last month) | 0.34 (0.00 - 0.72) | 0.005 | 0.43 (0.13 - 0.73) | 0.006 | 0.005 (2) |
| Hospitalizations (last month) | 1.02 (0.36 - 1.68) | 0.112 | 0.81 (0.40 - 1.22) | 0.133 | 0.044 (2) |
| Cases (24 hours) | 2.03 (1.10 - 2.96) | 0.118 | 1.30 (0.78 - 1.82) | 0.037 | 1.458 |
| New cases (7 days) | 2.71 (1.64 - 3.78) | 0.118 | 1.30 (0.78 - 1.82) | 0.023 | 1.116 |
| Cronbach's alpha | 0.83 | - | 0.95 | - | - |

Table 2: COVID-19 incidence metrics in % (and 95% CI) obtained from indirect survey data for China.

| Region | Samples | Cases (last month) | Vaccination rate | Mortality (last month) | Hosp. (last month) | Cases (24 hours) | Cases (7 days) |
| China | 469 | 78.57 (77.62-79.54) | 91.03 (90.36-91.70) | 1.19 (0.94-1.45) | 9.30 (8.61-9.97) | 2.87 (2.48-3.26) | 9.52 (8.83-10.21) |
| Jiangsu (province) | 48 | 75.56 (72.42-78.69) | 87.92 (85.54-90.30) | 1.67 (0.73-2.60) | 7.64 (5.70-9.58) | 2.64 (1.47-3.81) | 9.44 (7.31-11.58) |
| Guangdong (province) | 45 | 80.00 (76.98-83.02) | 86.07 (83.46-88.69) | 0.59 (0.01-1.17) | 5.33 (3.64-7.03) | 3.26 (1.92-4.60) | 6.96 (5.04-8.88) |
| Shandong (province) | 27 | 74.81 (70.59-79.04) | 95.80 (93.85-97.76) | 1.48 (0.30-2.66) | 8.40 (5.69-11.10) | 2.22 (0.79-3.66) | 6.67 (4.24-9.10) |
| Shanghai (city) | 9 | 68.89 (61.08-76.70) | 88.15 (82.70-93.60) | 2.22 (0.00-4.71) | 5.93 (1.94-9.91) | 0.74 (0.00-2.19) | 5.19 (1.44-8.93) |
| Guangzhou (city) | 11 | 81.82 (75.93-87.70) | 86.67 (81.48-91.85) | 1.82 (0.00-3.86) | 9.70 (5.18-14.21) | 4.85 (1.57-8.13) | 7.27 (3.31-11.24) |
| Chengdu (city) | 8 | 89.17 (83.61-94.73) | 88.33 (82.59-94.08) | 0.83 (0.00-2.46) | 8.33 (3.39-13.28) | 0.83 (0.79-2.45) | 8.33 (3.39-13.28) |
| Beijing (city) | 8 | 74.17 (66.33-82.00) | 91.67 (86.72-96.61) | 0.83 (0.00-2.45) | 13.33 (7.25-19.42) | 5.00 (1.10-8.90) | 11.67 (5.92-17.41) |

Table 2 shows the estimated results for China for all the questions of the survey.
While 1,000 indirect survey responses were collected, the filters specified in Section 2.2 were applied, drastically reducing the sample size to 469. Comparing our results with the OWID data for China, the vaccination rate is 91.9% while we estimate 91.03% (90.36%-91.70%), which is almost a perfect match. The number of deaths reported by OWID is 0.005% while we estimate 1.19% (0.94%-1.45%), a much higher value. However, OWID warns that "the number of confirmed deaths may not accurately represent the true number of deaths". Therefore, our estimate could serve as a first approximation (that may be biased). Our estimate of the number of cases in the last month is 78.57% (77.62%-79.54%), very far from the 6.182% reported by OWID (which warns that "the number of confirmed cases is lower than the true number of infections"). Note that some areas of China may have a high incidence, as noted in the report published at [16]: "nearly 90% of Henan's population had been infected by 6 January".

We compute estimates for the provinces and cities with the largest number of samples (see Table 2). The rates of vaccination and of cases in the last month are similar in all of them and similar to the values for China as a whole. The Guangdong province shows the lowest estimates of hospitalizations and deaths, while it has large case estimates among provinces. Among cities, Beijing shows low estimates of monthly cases, but large rates of recent cases and hospitalizations. Unfortunately, the sample size for cities is very small. Finally, we would like to point out that, in general, the sample sizes are small relative to the size of the country. Additionally, as can be seen in Table 3 in Supplementary Information, the sample is biased by age and education level. These biases are reduced with the use of indirect questions, but still more studies are needed.

4 CONCLUSIONS AND FUTURE WORK
This work aims to estimate a snapshot of COVID-19 incidence, hospitalizations, and mortality from indirect surveys in China in January 2023.
To estimate these epidemic indicators, we used a modified version of the NSUM technique that fixes the number of people in the contact network. In addition, a data pre-processing stage is included to extract a reliable set of survey samples. In future work, we are interested in analyzing multiple data preprocessing techniques to minimize the number of discarded samples and maximize the knowledge extracted from indirect surveys. Additional results and a more extended discussion can be found in the full version of the article [11].

5 RESEARCH ETHICS APPROVAL
To carry out this study, a request was previously made to the ethics committee of the IMDEA Networks Institute, which approved it in the last quarter of 2022. Essentially, the ethics committee approved that the study could be carried out while keeping the anonymity of the respondents. Moreover, the platform used for collecting the survey information guarantees that the participants (who belong to that platform) give their consent to participate.

6 CONFLICT OF INTEREST DISCLOSURES
None reported.

7 FUNDING/SUPPORT
This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by the Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.

8 DATA SHARING STATEMENT
The data collected in the indirect surveys is publicly available at https://github.com/GCGImdea/coronasurveys/tree/master/papers/2023-COVID-19-China-January.

9 ACKNOWLEDGMENT
We want to thank Lin Wang for his help with the Chinese version of the survey.
bXQdGqYlDN
Review of a Paper on Estimating COVID-19 Snapshots: Strong Results, Need for Comparisons, and Requirement for Further Elaboration
2: Marginally below acceptance threshold
Quality: The quality of the paper is good overall. The authors present an approach to estimating COVID-19 snapshots using a modified Network Scale-up Method (NSUM) and validate their estimates against official data. The data preprocessing stage helps enhance the reliability of the collected data, and the privacy preservation aspect adds value to the study. However, there are some limitations, such as the lack of comparisons with other estimation methods or its limited generalizability. Clarity: The paper is generally well-written and presents the information in a clear manner but does have a few typos and items which could have been explained some more. The introduction provides adequate background information about the need for indirect survey methods and the challenges associated with official COVID-19 data. The methodology section explains the data preprocessing techniques well, but could benefit from further clarification. For example, the NSUM technique is only cited but not explained anywhere, and also the choice of setting ri=15 is not justified (why not 5 or 10?). Originality: The paper cites that the use of indirect surveys to estimate different variables using NSUM is not something new, it also cites that this has also been done for estimating different indicators during the COVID-19 pandemic. Significance: The significance of this work lies in its potential to provide valuable insights into COVID-19 indicators, especially in settings where official data is limited or unreliable. The indirect survey method offers a practical solution to estimate important epidemiological information, which can aid decision makers and researchers in understanding the spread of the disease and acting accordingly. The paper's comparison with official data and validation of estimates add credibility to its findings, further highlighting its significance. 
Pros: - Justification for Indirect Surveys: The paper provides a strong rationale for using indirect surveys, highlighting privacy preservation and other benefits. - Validated and Discussed Results: The paper presents well-validated results and provides a comprehensive discussion of the findings. - Use of Cronbach's Alpha Coefficient: The paper employs Cronbach's alpha coefficient, a reliable measure of internal consistency, enhancing the robustness of the analysis. - Acknowledgment of Sample Size Limitation: The paper recognizes the limitation of the sample size and discusses its potential impact on the accuracy and generalizability of the estimates. - Data Preprocessing Stage: The paper includes a well-described data preprocessing stage, which enhances the reliability and quality of the collected data. Cons: - Need for Comparison to Validate Modifications and NSUM Choice: The paper should include a comparison with other methods to validate the modifications made and the selection of the Network Scale-up Method (NSUM). - Insufficient Elaboration on NSUM and Choice of "ri": The paper should provide more explanation and elaboration on NSUM and the selection of "ri" to improve reader understanding. - Limited Generalizability: The study's focus on a specific time period and a restricted set of countries (China, Australia, and the UK) limits the generalizability of the results to other countries and different time periods. - Few typos: 1- Typo in mortality rate, should be 0.72 not 0.22 based on table (line 218 right column). 2- Variable naming is either not consistent or not explained sufficiently in equations 1 and 2; it would be good to clarify here what the "a", "b", "alpha", and "beta" variables represent. In summary, this paper presents a good approach to estimate COVID-19 indicators using the Network Scale-up Method. While it has strong results, there are also limitations to consider. Further elaboration could address a lot of these limitations.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
rMSlLb33Gb
KDD.org/2023/Workshop/epiDAMIK
2023
A Snapshot of COVID-19 Incidence, Hospitalizations, and Mortality from Indirect Survey Data in China in January 2023 (Extended Abstract)
["Juan Marcos Ramirez", "Sergio Diaz-Aranda", "Jose Aguilar", "Oluwasegun Ojo", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts of survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a value very close to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained from the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates in relation to the consistency of the answers is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys to achieve a wide population coverage, preserving the privacy of the participants.
["COVID-19", "incidence estimation", "indirect surveys", "NSUM"]
ABSTRACT
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts of survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a value very close to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained from the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates, in relation to the consistency of the answers, is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys while achieving wide population coverage and preserving the privacy of the participants.

KEYWORDS
COVID-19, incidence estimation, indirect surveys, NSUM

1 INTRODUCTION
To effectively manage public health resources, monitoring infectious diseases such as COVID-19 requires knowledge of various epidemic indicators, such as the number of cases, deaths, and hospitalizations, among others. Most of these indicators have been collected through the use of methods that require the presence of a substantial portion of the target population, such as antigen test screenings or hospital records. In order to overcome these disadvantages, several methods have used direct surveys to estimate indicators [1, 2]. Unfortunately, direct surveys depend on the participation of a large number of people to obtain reliable estimates, usually collect sensitive personal data (which may deter respondents due to privacy concerns), and require careful data manipulation.

An alternative to these surveys is using indirect surveys, which ask participants about the people in their contact network, rather than about themselves. From the responses provided by indirect surveys, estimates of different variables can be derived using the Network Scale-up Method (NSUM) [3, 4]. As a result of this approach, 1) a larger sub-population may be reached, 2) data collection costs may be reduced, 3) a computationally efficient method can be used to obtain estimates, and 4) participants are assured high levels of privacy. Indirect surveys have already been implemented for estimating indicators during the COVID-19 pandemic [5, 6].

In this work, we use indirect online surveys to capture a snapshot of cases, mortality, vaccination, and hospitalizations due to COVID-19 in China for the period of January 18-26, 2023.
To this end, a modified version of the NSUM approach that fixes the number of people in the contact network is used to estimate different epidemic indicators. In essence, this modified version extracts knowledge about epidemic indicators without resorting to the additional control questions that are usually considered to estimate the reach (the number of people in the contact network). In addition, a data preprocessing stage is included, which comprises a set of consistency filters and a nonlinear outlier detection stage, to improve the reliability of the collected data. We validate our approach using data from Australia and the United Kingdom (UK) collected on January 19, 2023. These metrics are compared with respect to the official values reported by Our World in Data (OWID) and the Office for National Statistics (ONS) from the UK. In addition, we use Cronbach's alpha index [7], which is a reliability value to measure the internal consistency of the questionnaire generated by indirect surveys.

2 METHODS

2.1 Sampling Participants

We conducted online indirect surveys using the PollFish platform. Specifically, we conducted an online survey in China between January 18-26, 2023. This online survey collected information about various COVID-19 indicators (vaccination, deaths, and number of cases in the last month, the last 7 days, and the past 24 hours) among the 15 closest contacts of 1,000 participants (see the Supplementary Information section for the English version of the survey questions). Notice that the selected number of closest contacts to respondents (15) is considered the size of the good-friends support group according to Dunbar's theory [8]. This number provides us a trade-off between the size of the subpopulation we aim to cover (reach) and the minimization of undesired effects due to respondents, such as transmission and recall errors [4].
Additionally, for validation, we conducted online surveys in Australia (100 responses) and the UK (200 responses) on January 19, 2023. Table 3 in the Supplementary Information shows the characteristics of the survey respondents (the platform provides information on gender, age group, education, and ethnicity). The respondents of each survey are also stratified by region. For instance, Fig. 1 in the Supplementary Information shows a map of China where the intensity corresponds to the number of questionnaires completed in each province.

2.2 Data Analysis

In order to obtain a reliable dataset, we performed two subphases of preprocessing: (1) an inconsistency filter, and (2) a univariate outlier detection.

(1) The inconsistency filter removes participants with inconsistent responses: fewer infected contacts than fatalities, fewer infected contacts than hospitalized, fewer infected contacts in the last month than in the last 7 days, and fewer infected contacts in the last month than in the last 24 hours.

(2) Since the collected variables exhibit extremely skewed distributions, the robust outlier detection method reported in [9] is applied. Based on the variable data, this method first estimates the quartiles Q1 and Q3, as well as the interquartile range (IQR). Then, the whiskers Qα and Qβ are set. Finally, this method preserves the samples in the interval limited by

[Q1 − 1.5 e^{a·MC} IQR ; Q3 + 1.5 e^{b·MC} IQR]   (1)

where MC is the medcouple statistic that estimates the degree of skewness of the data. Samples outside the interval are marked as outliers and, consequently, are removed. In addition, to estimate the parameters a and b, we consider the system [9]

log((2/3)(Q1 − Qα)/IQR) ≈ a·MC,   log((2/3)(Qβ − Q3)/IQR) ≈ b·MC,   (2)

where Qα and Qβ are the α-th and β-th quantiles of the distribution, with α = 0.15 and β = 0.85.

We consider the NSUM approach to estimate the rates of the different COVID-19 indicators.
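Before turning to estimation, the outlier-detection step above can be sketched in Python. This is an illustration, not the authors' code: it uses a naive O(n²) medcouple and, for simplicity, the common adjusted-boxplot constants a = −4, b = 3 (for right-skewed data) in place of the quantile-calibrated a and b of Eq. (2).

```python
import numpy as np

def medcouple(x):
    """Naive O(n^2) medcouple: the median of the kernel
    h(xi, xj) = ((xj - m) - (m - xi)) / (xj - xi)
    over pairs with xi <= m <= xj, where m is the sample median.
    Ties at the median use the simplified convention h = 0."""
    x = np.sort(np.asarray(x, dtype=float))
    m = np.median(x)
    lo, hi = x[x <= m], x[x >= m]
    h = [((xj - m) - (m - xi)) / (xj - xi) if xj > xi else 0.0
         for xi in lo for xj in hi]
    return float(np.median(h))

def adjusted_boxplot_filter(x, a=-4.0, b=3.0):
    """Keep samples inside [Q1 - 1.5 e^{a*MC} IQR, Q3 + 1.5 e^{b*MC} IQR],
    cf. Eq. (1). The defaults a = -4, b = 3 are the usual adjusted-boxplot
    constants for right-skewed data, not the survey-calibrated values."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mc = medcouple(x)
    lower = q1 - 1.5 * np.exp(a * mc) * iqr
    upper = q3 + 1.5 * np.exp(b * mc) * iqr
    return x[(x >= lower) & (x <= upper)]
```

For MC = 0 (symmetric data) the fences reduce to the standard 1.5·IQR boxplot rule; positive skew widens the upper fence and tightens the lower one.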
In particular, NSUM is a statistical framework for estimating hidden populations from indirect surveys. There are three main NSUM approaches: frequentist models that estimate subpopulation rates, Bayesian models that include priors, and network models that estimate population properties [4]. To estimate cumulative incidences, hospitalization rates, and mortality rates, we modify an NSUM method belonging to the category of frequentist models based on maximum likelihood estimation (MLE). In this regard, let c_i be the number of contacts of the i-th respondent that have a particular characteristic, e.g., persons who have been hospitalized. Further, consider r_i the number of close contacts of the i-th respondent (which in this study is fixed at r_i = 15, as shown in the questions in the Supplementary Information). The requirement of close contacts is introduced to minimize the effect of the visibility bias [10] with respect to the classical method [3]. Hence, we estimate the aggregated rate, p, as Σ_i c_i / Σ_i r_i = Σ_i c_i / (15n), with n as the number of responses (samples). The estimator's standard error is √(p(1−p)/(15n)), assuming that the c_i are independent binomial random variables with 15 trials and success probability p.

We evaluated the validity of our approach by comparing the difference between the official values reported on the Our World in Data (OWID)¹ platform and the values estimated by our approach for Australia and the United Kingdom (see Table 1). In both countries, official data were extracted between December 20, 2022, and January 19, 2023. In order to determine the number of hospitalized persons given the hospital occupancy, the length of a hospital stay is fixed at 4 days [12, 13].

Additionally, for the UK, we use the data provided by the Office for National Statistics (ONS)². In particular, for the number of cases we use the daily estimates of the infected population obtained by the Coronavirus (COVID-19) Infection Survey of the ONS.
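The fixed-reach estimator and its normal-approximation 95% confidence interval described above can be sketched as follows (an illustrative implementation; the function and argument names are our own):

```python
import math

def nsum_rate(contact_counts, reach=15, z=1.96):
    """Fixed-reach NSUM estimate p = sum(c_i) / (reach * n),
    with normal-approximation CI p +/- z * sqrt(p(1-p)/(reach*n)),
    clipped to [0, 1]."""
    n = len(contact_counts)
    total = reach * n
    p = sum(contact_counts) / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (max(p - z * se, 0.0), min(p + z * se, 1.0))

# e.g., 100 respondents each reporting 3 affected contacts out of 15:
p, (lo, hi) = nsum_rate([3] * 100)  # p == 0.2, CI approx (0.180, 0.220)
```

Note how the interval width shrinks with both the number of responses n and the reach, which is why a modest number of indirect surveys can still yield usable intervals.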
For the 7 days and the last month's estimates, in order not to count the same cases multiple times, the sum of the daily percentages is divided by 10 days, an estimated average duration of the infection with Omicron [14]. Hospitalizations are the sum of the weekly admission rates with COVID-19 in England from Dec 19, 2022, to Jan 22, 2023 (5 weeks). Mortality is the rate of registered deaths involving COVID-19 in England from Dec 17, 2022, to Jan 20, 2023.

Finally, we use Cronbach's alpha coefficient to measure the reliability of the results obtained from the indirect surveys. Specifically, it quantifies the reliability of a value of an unobservable variable constructed from the observed variables. The closer this coefficient is to its maximum value of 1, the greater the reliability of the measure, but in general, it is considered that values greater than 0.7 are sufficient to guarantee reliability. In this work, we compute Cronbach's alpha coefficient based on correlations [15].

3 RESULTS

Table 1 displays the estimates and the 95% confidence interval for the surveys conducted in the UK and Australia. In addition, it shows the statistics provided by official reports. The confidence interval is computed as p ± 1.96 √(p(1−p)/(15n)). As can be observed, the vaccination estimates are very close to the official values: they are estimated as 76.50% (73.70% - 79.29%) and 78.86% (95% confidence interval: 77.00% - 80.72%) in Australia and the UK, respectively, while the official (OWID) values are 84.95% and 79.71%. In the case of mortality and hospitalizations in the last month, the official values are within the confidence interval of our estimates in the case of Australia. Specifically, the mortality rate is 0.34% (0.00% - 0.72%) and the official value is 0.005%; the hospitalization rate is 1.02% (0.36% - 1.68%) and the official value is 0.112%.
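As an aside on the methods, the correlation-based Cronbach's alpha mentioned above can be sketched in its standardized form, computed from the average pairwise correlation of the k items (an illustration; the exact variant used via [15] may differ):

```python
import numpy as np

def cronbach_alpha_std(item_scores):
    """Standardized Cronbach's alpha from the average pairwise
    correlation r_bar of k items (one row per item):
        alpha = k * r_bar / (1 + (k - 1) * r_bar)."""
    R = np.corrcoef(np.asarray(item_scores, dtype=float))
    k = R.shape[0]
    r_bar = R[np.triu_indices(k, 1)].mean()  # mean off-diagonal correlation
    return k * r_bar / (1 + (k - 1) * r_bar)
```

Perfectly consistent items give alpha = 1; values above roughly 0.7 are conventionally taken as acceptable reliability, as noted above.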
Also, in the case of the UK, the official values of ONS are within the confidence interval of our estimates of the number of cases, new cases in the last 7 days, and cases in the last 24 hours. Cronbach's alpha coefficient is 0.83 for Australia and 0.95 for the UK, which tells us that the reliability of the estimates is very good. The results of the estimates and Cronbach's alpha coefficient allow concluding that we can use the indirect survey approach to make estimates when official data are not available or not reliable, and use them considering a prudential bias when assessing them.

¹ https://ourworldindata.org/, downloaded on July 24th, 2023. Observe that these values have changed from those downloaded in February 2023 [11].
² https://www.ons.gov.uk/, downloaded on February 3rd, 2023.

Table 1: COVID-19 metrics in % (and 95% CI) obtained from indirect survey data and official reports for Australia and the UK. (1) People aged 12 years and over that have received at least one/two/three doses on Aug 31, 2022. (2) England data only, 5 weeks.

| Metric | Australia: Indirect Survey | Australia: OWID | UK: Indirect Survey | UK: OWID | UK: ONS |
|---|---|---|---|---|---|
| Cases (last month) | 12.43 (10.26 - 14.60) | 1.731 | 8.67 (7.39 - 9.96) | 0.298 | 9.663 |
| Vaccination rate | 76.50 (73.70 - 79.29) | 84.95 | 78.86 (77.00 - 80.72) | 79.71 | 93.6/88.2/70.2 (1) |
| Mortality (last month) | 0.34 (0.00 - 0.72) | 0.005 | 0.43 (0.13 - 0.73) | 0.006 | 0.005 (2) |
| Hospitalizations (last month) | 1.02 (0.36 - 1.68) | 0.112 | 0.81 (0.40 - 1.22) | 0.133 | 0.044 (2) |
| Cases (24 hours) | 2.03 (1.10 - 2.96) | 0.118 | 1.30 (0.78 - 1.82) | 0.037 | 1.458 |
| New cases (7 days) | 2.71 (1.64 - 3.78) | 0.118 | 1.30 (0.78 - 1.82) | 0.023 | 1.116 |
| Cronbach's alpha | 0.83 | | 0.95 | | |

Table 2: COVID-19 incidence metrics in % obtained from indirect survey data for China.

| Region | Samples | Cases (last month) | Vaccination rate | Mortality (last month) | Hosp. (last month) | Cases (24 hours) | Cases (7 days) |
|---|---|---|---|---|---|---|---|
| China | 469 | 78.57 (77.62-79.54) | 91.03 (90.36-91.70) | 1.19 (0.94-1.45) | 9.30 (8.61-9.97) | 2.87 (2.48-3.26) | 9.52 (8.83-10.21) |
| Jiangsu (province) | 48 | 75.56 (72.42-78.69) | 87.92 (85.54-90.30) | 1.67 (0.73-2.60) | 7.64 (5.70-9.58) | 2.64 (1.47-3.81) | 9.44 (7.31-11.58) |
| Guangdong (province) | 45 | 80.00 (76.98-83.02) | 86.07 (83.46-88.69) | 0.59 (0.01-1.17) | 5.33 (3.64-7.03) | 3.26 (1.92-4.60) | 6.96 (5.04-8.88) |
| Shandong (province) | 27 | 74.81 (70.59-79.04) | 95.80 (93.85-97.76) | 1.48 (0.30-2.66) | 8.40 (5.69-11.10) | 2.22 (0.79-3.66) | 6.67 (4.24-9.10) |
| Shanghai (city) | 9 | 68.89 (61.08-76.70) | 88.15 (82.70-93.60) | 2.22 (0.00-4.71) | 5.93 (1.94-9.91) | 0.74 (0.00-2.19) | 5.19 (1.44-8.93) |
| Guangzhou (city) | 11 | 81.82 (75.93-87.70) | 86.67 (81.48-91.85) | 1.82 (0.00-3.86) | 9.70 (5.18-14.21) | 4.85 (1.57-8.13) | 7.27 (3.31-11.24) |
| Chengdu (city) | 8 | 89.17 (83.61-94.73) | 88.33 (82.59-94.08) | 0.83 (0.00-2.46) | 8.33 (3.39-13.28) | 0.83 (0.79-2.45) | 8.33 (3.39-13.28) |
| Beijing (city) | 8 | 74.17 (66.33-82.00) | 91.67 (86.72-96.61) | 0.83 (0.00-2.45) | 13.33 (7.25-19.42) | 5.00 (1.10-8.90) | 11.67 (5.92-17.41) |

Table 2 shows the estimated results for China for all the questions of the survey.
While 1,000 indirect survey responses were collected, the filters specified in Section 2.2 were used, drastically reducing the sample size to 469. Comparing our results with the OWID data for China, the vaccination rate is 91.9% while we estimate 91.03% (90.36%-91.7%), which is almost a perfect match. The number of deaths reported by OWID is 0.005% while we estimate 1.19% (0.94%-1.45%), a much higher value. However, OWID warns that "the number of confirmed deaths may not accurately represent the true number of deaths". Therefore, our estimate could serve as a first approximation (that may be biased). Our estimate of the number of cases in the last month is 78.57% (77.62%-79.54%), very far from the 6.182% reported by OWID (which warns that "the number of confirmed cases is lower than the true number of infections"). Note that some areas of China may have a high incidence, as noted in the report published at [16]: "nearly 90% of Henan's population had been infected by 6 January".

We compute estimates for the provinces and cities with the largest number of samples (see Table 2). The rate of vaccination and cases in the last month is similar in all of them and similar to the values in China. The Guangdong province shows the lowest estimates of hospitalizations and deaths, while it has large case estimates among provinces. Among cities, Beijing shows low estimates of monthly cases, but large rates of recent cases and hospitalizations. Unfortunately, the sample size for cities is very small. Finally, we would like to point out that, in general, the data are relatively small compared to the size of the country. Additionally, as can be seen in Table 3 in the Supplementary Information, the sample is biased by age and education level. These biases are reduced with the use of indirect questions, but still more studies are needed.

4 CONCLUSIONS AND FUTURE WORK

This work aims to estimate a snapshot of COVID-19 incidence, hospitalizations, and mortality from indirect surveys in China in January 2023.
To estimate these epidemic indicators, we used a modified version of the NSUM technique that fixes the number of people in the contact network. In addition, a data preprocessing stage is included to extract a reliable set of survey samples. In future work, we are interested in analyzing multiple data preprocessing techniques to minimize the number of discarded samples and maximize indirect survey knowledge extraction. Additional results and a more extended discussion can be found in the full version of the article [11].

5 RESEARCH ETHICS APPROVAL

To carry out this study, a request was previously made to the ethics committee of IMDEA Networks Institute, which approved it in the last quarter of 2022. Basically, the ethics committee approved that the study could be carried out keeping the anonymity of the respondents. On the other hand, the platform used for the collection of survey information guarantees that the participants (who belong to that platform) give their consent to participate.

6 CONFLICT OF INTEREST DISCLOSURES

None reported.

7 FUNDING/SUPPORT

This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.

8 DATA SHARING STATEMENT

The data collected in the indirect surveys is publicly available at https://github.com/GCGImdea/coronasurveys/tree/master/papers/2023-COVID-19-China-January.

9 ACKNOWLEDGMENT

We want to thank Lin Wang for his help with the Chinese version of the survey.
OLCYPVZAmhN
Review of the paper.
4: Good paper, accept
This study presents estimates for COVID incidence cases, deaths, and vaccination rates based on a survey study. Overall, the paper is really good in quality, clarity, originality and significance. The paper is well-written; however, there are a few areas that the authors should address:
1. In section 2.1, the authors conducted an online survey in Australia and the UK for validation. It would be beneficial for the authors to provide justification for selecting these specific countries. For example, they should explain why China was not included in the online survey.
2. If space allows, it would be helpful to include a figure illustrating the skewness in the data, as discussed in Section 2.2. This figure could demonstrate the requirement of Medcouple statistics.
3. Please include a reference for the Cronbach's alpha coefficient. This would provide readers with additional information and support the use of this measure.
4. To enhance the paper's transparency, the authors should clarify how the 95% confidence interval (C.I.) was computed in Table 1 and Table 2.
5. It is unclear how a small sample size, such as the one used for all the cities, can be utilized to derive the confidence interval and make any valid claims.
By addressing these points, the authors can further improve the clarity and comprehensiveness of their work.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
rMSlLb33Gb
KDD.org/2023/Workshop/epiDAMIK
2023
A Snapshot of COVID-19 Incidence, Hospitalizations, and Mortality from Indirect Survey Data in China in January 2023 (Extended Abstract)
["Juan Marcos Ramirez", "Sergio Diaz-Aranda", "Jose Aguilar", "Oluwasegun Ojo", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
The estimation of incidence has been a crucial component for monitoring COVID-19 dissemination. This has become challenging when official data are unavailable or insufficiently reliable. Hence, the implementation of efficient, inexpensive, and secure techniques that capture information about epidemic indicators is required. This study aims to provide a snapshot of COVID-19 incidence, hospitalizations, and mortality in different countries in January 2023. To this end, we collected data on the number of cases, deaths, vaccinations, and hospitalizations among the fifteen closest contacts to survey respondents. More precisely, indirect surveys were conducted for 100 respondents from Australia on 19 January 2023, 200 respondents from the UK on 19 January 2023, and 1,000 respondents from China between 18-26 January 2023. To assess the incidence of COVID-19, we used a modified version of the Network Scale-up Method (NSUM) that fixes the number of people in the contact network (reach). We have compared our estimates with official data from Australia and the UK in order to validate our approach. In the case of the vaccination rate, our approach estimates a very close value to the official data, and in the case of hospitalizations and deaths, the official results are within the confidence interval. Regarding the remaining variables, our approach overestimates the values obtained by the Our World in Data (OWID) platform but is close to the values provided by the Office for National Statistics (ONS) in the case of the UK (within the confidence interval). In addition, Cronbach's alpha gives values that allow us to conclude that the reliability of the estimates in relation to the consistency of the answers is excellent for the UK and good for Australia. Following the same methodology, we have estimated the same metrics for different Chinese cities and provinces.
It is worth noting that this approach allows quick estimates to be made with a reduced number of surveys to achieve a wide population coverage, preserving the privacy of the participants.
["COVID-19", "incidence estimation", "indirect surveys", "NSUM"]
ABSTRACTThe estimation of incidence has been a crucial component for moni-toring COVID-19 dissemination. This has become challenging whenofficial data are unavailable or insufficiently reliable. Hence, the im-plementation of efficient, inexpensive, and secure techniques thatcapture information about epidemic indicators is required. Thisstudy aims to provide a snapshot of COVID-19 incidence, hospital-izations, and mortality in different countries in January 2023. To thisend, we collected data on the number of cases, deaths, vaccinations,and hospitalizations among the fifteen closest contacts to survey re-spondents. More precisely, indirect surveys were conducted for 100respondents from Australia on 19 January 2023, 200 respondentsfrom the UK on 19 January 2023, and 1,000 respondents from Chinabetween 18-26 January 2023. To assess the incidence of COVID-19,we used a modified version Network Scale-up Method (NSUM) thatfixes the number of people in the contact network (reach). We havecompared our estimates with official data from Australia and theUK in order to validate our approach. In the case of the vaccinationrate, our approach estimates a very close value to the official data,and in the case of hospitalizations and deaths, the official results arewithin the confidence interval. Regarding the remaining variables,our approach overestimates the values obtained by the Our Worldin Data (OWID) platform but is close to the values provided by theOfficer of National Statistics (ONS) in the case of the UK (within theconfidence interval). In addition, Cronbach’s alpha gives values thatallow us to conclude that the reliability of the estimates in relationto the consistency of the answers is excellent for the UK and goodfor Australia. Following the same methodology, we have estimatedthe same metrics for different Chinese cities and provinces. 
It isworth noting that this approach allows quick estimates to be madewith a reduced number of surveys to achieve a wide populationcoverage, preserving the privacy of the participants.KEYWORDSCOVID-19, incidence estimation, indirect surveys, NSUM1 INTRODUCTIONTo effectively manage public health resources, monitoring infec-tious diseases such as COVID-19 requires knowledge of variousepidemic indicators, such as the number of cases, deaths, and hos-pitalizations, among others. Most of these indicators have beencollected through the use of methods that require the presenceof a substantial portion of the target population, such as antigentest screenings or hospital records. In order to overcome thesedisadvantages, several methods have used direct surveys to esti-mate indicators [ 1,2]. Unfortunately, direct surveys depend onthe participation of a large number of people to obtain reliableestimates, usually collect sensitive personal data (which may de-ter respondents due to privacy concerns), and require careful datamanipulation.An alternative to these surveys is using indirect surveys, whichask participants about the people in their contact network, ratherthan themselves. From the responses provided by indirect surveys,the estimates of different variables can be derived using NetworkScale-up Method (NSUM) [ 3,4]. As a result of this approach, 1) alarger sub-population may be reached, 2) data collection costs maybe reduced, 3) a computationally efficient method can be used toobtain estimates, and 4) participants will be assured of high levelsof privacy. Indirect surveys have already been implemented forestimating indicators during the COVID-19 pandemic [5, 6].In this work, we use indirect online surveys to capture a snapshotof cases, mortality, vaccination, and hospitalizations due to COVID-19 in China for the period of January 18-26, 2023. 
To this end, amodified version of the NSUM approach that fixes the number ofpeople in the contact network is used to estimate different epidemicindicators. In essence, this modified version extracts knowledgeabout epidemic indicators without resorting to additional controlquestions that usually are considered to estimate the reach (thenumber of people in the contact network). In addition, a data pre-processing stage is included, which comprises of a set consistencyfilters and a nonlinear outlier detection stage, to improve the reli-ability of the collected data. We validate our approach using datafrom Australia and the United Kingdom (UK) collected on January19, 2023. These metrics are compared with respect to the officialvalues reported by Our World in Data (OWID) and the Office forNational Statistics (ONS) from UK. In addition, we use Cronbach’salpha index [ 7], which is a reliability value to measure the internalconsistency of the questionnaire generated by indirect surveys.2 METHODS2.1 Sampling ParticipantsWe conducted online indirect surveys using the PollFish platform.Specifically, we conducted an online survey in China between Jan-uary 18-26, 2023. This online survey collected information aboutvarious COVID-19 indicators (vaccination, deaths, and number ofcases in the last month, the last 7 days, and the past 24 hours) amongthe 15 closest contacts of 1,000 participants (see SupplementaryInformation section for the English version of the survey questions).Notice that the selected number of closest contacts to respondents(15) is considered the size of the good-friends support group accord-ing to Dunbar’s theory [ 8]. This number provides us a trade-offbetween the size of the subpopulation we aim to cover (reach) andJuan Marcos Ramírez, Sergio Díaz-Aranda, Jose Aguilar, Antonio Fernández Anta and Oluwasegun Ojo, Rosa Elvira Lillothe minimization of undesired effects due to respondents such astransmission and recall errors [ 4]. 
Additionally, for validation, weconducted online surveys in Australia (100 responses) and the UK(200 responses) on January 19, 2023. Table 3 in Supplementary In-formation shows the characteristics of the survey respondents (theplatform provides information on gender, age group, education,and ethnicity). The respondents of each survey are also stratifiedby region. For instance, Fig. 1 in Supplementary Information showsa map of China where the intensity corresponds to the number ofquestionnaires completed in each province.2.2 Data AnalysisIn order to obtain a reliable dataset, we performed two subphasesof preprocessing: (1) an inconsistency filter, and (2) a univariateoutlier detection.(1)The inconsistency filter removes participants with inconsistentresponses: less infected contacts than fatalities, less infectedcontacts than hospitalized, less infected contacts in the lastmonth than in the last 7 days, and less infected contacts in thelast month than in the last 24 hours.(2)Since the collected variables exhibit extremely skewed distri-butions, the robust outlier detection method reported in [ 9]is applied. Based on the variable data, this method firstly es-timates the quartiles Q1andQ3, as well as the interquartilerange (IQR). Then, the whiskers QαandQβare set. Finally, thismethod preserves the samples in the interval limited by[Q1−1.5eaMCIQR;Q3+1.5ebMCIQR] (1)whereMCis the medcouple statistic that estimates the degree ofskewness of the data. Samples outside the interval are marked asoutliers and, consequently, are removed. In addition, to estimatethe parameters aandb, we consider the system [9] log23Q1−QαIQR≈aMClog23Qβ−Q3IQR≈bMC .(2)whereQαandQβare theα-th andβ-th quantiles of the distri-bution, with α=0.15andα=0.85.We consider the NSUM approach to estimate the rates of thedifferent COVID-19 indicators. 
In particular, NSUM is a statisticalframework for estimating hidden populations from indirect surveys.There are three main NSUM approaches: frequentist models thatestimate subpopulation rates, Bayesian models that include priors,and network models that estimate population properties [ 4]. Toestimate cumulative incidences, hospitalization rates, and mortalityrates, we modify an NSUM method belonging to the category offrequentist models based on the maximum likelihood estimation(MLE). In this regard, let cibe the number of contacts of the i-threspondent that have a particular characteristic, e.g., persons whohave been hospitalized. Further, consider rithe number of closecontacts of the i-th respondent (which in this study is fixed at ri=15, as shown in the questions in the Supplementary Information).The requirement of close contacts is introduced to minimize theeffect of the visibility bias [ 10] with respect to the classical method[3]. Hence, we estimate the aggregated rate, p, asÍici/Íiri=Íici/(15n), withnas the number of responses (samples). Theestimator’s variance is√︁p(1−p)/(15n), assuming that the ciareindependent binomial random variables with 15 trials and successprobabilityp.We evaluated the validity of our approach by comparing thedifference between the official values reported on the Our World inData (OWID)1platform and the values estimated by our approachfor Australia and the United Kingdom (see Table 1). In both coun-tries, official data were extracted between December 20, 2022, andJanuary 19, 2023. In order to determine the number of hospitalizedpersons given the hospital occupancy, the length of a hospital stayis fixed at 4 days [12, 13].Additionally, for the UK, we use the data provided by the Officefor National Statistics (ONS)2. In particular, for the number of caseswe use the daily estimates of the infected population obtainedby the Coronavirus (COVID-19) Infection Survey of the ONS. 
Forthe 7 days and the last month’s estimates, in order not to countmultiple times the same cases, the sum of the daily percentages isdivided by 10 days, an estimated average duration of the infectionwith Omicron [ 14]. Hospitalizations are the sum of the weeklyadmission rates with COVID-19 in England from Dec 19, 2022, toJan 22, 2023 (5 weeks). Mortality is the rate of registered deathsinvolving COVID-19 in England from Dec 17, 2022, to Jan 20, 2023.Finally, we use Cronbach’s Alpha coefficient to measure the reli-ability of the results obtained from the indirect surveys. Specifically,it quantifies the reliability of a value of an unobservable variableconstructed from the observed variables. The closer this coefficientis to its maximum value of 1, the greater the reliability of the mea-sure, but in general, it is considered that values greater than 0.7are sufficient to guarantee reliability. In this work, we computeCronbach’s Alpha coefficient based on correlations [15].3 RESULTSTable 1 displays the estimates and the 95% confidence interval forthe surveys conducted in the UK and Australia. In addition, it showsthe statistics provided by official reports. The confidence intervalis computed as p±1.96√︁p(1−p)/(15n). As can be observed, thevaccination estimates are very close to the official values: they areestimated as 76.50% (73.70% - 79.29%) and 78.86% (95% confidenceinterval: 77.00% - 80.72%) in Australia and UK, respectively, whilethe official (OWID) values are 84.95% and 79.71%. In the case ofmortality and hospitalizations in the last month, the official valuesare within the confidence interval of our estimates in the case ofAustralia. Specifically, the mortality rate is 0.34% (0.00% - 0.72%) andthe official is 0.005%, the hospitalization rate is 1.02% (0.36% - 1.68%)and the official is 0.112%. 
Also, in the case of the UK, the officialvalues of ONS are within the confidence interval of our estimates ofthe number of cases, new cases in the last 7 days, and cases in thelast 24 hours. Cronbach’s alpha coefficient is 0.83 for Australia and0.95 for the UK, which tells us that the reliability of the estimatesis very good. The results of the estimates and Cronbach’s alphacoefficient allow concluding that we can use the indirect surveyapproach to make estimates when official data is not available or1https://ourworldindata.org/, downloaded on July 24th, 2023. Observe that these valueshave changed from those downloaded in February 2023 [11].2https://www.ons.gov.uk/, downloaded on February 3rd, 2023.A Snapshot of COVID-19 Incidence, Hospitalizations, and Mortality from Indirect Survey Data in China in January 2023 (Extended Abstract)Table 1: COVID-19 metrics in % (and 95% CI) obtained from indirect survey data and official reports for Australia and the UK. (1)People aged 12 years and over that have received at least one/two/three doses on Aug 31, 2022. 
(2) England data only, 5 weeks.Australia United KingdomIndirect Survey OWID Indirect Survey OWID ONSCases12.43 (10.26 - 14.60) 1.731 8.67 (7.39 - 9.96) 0.298 9.663(last month)Vaccination76.50 (73.70 - 79.29) 84.95 78.86 (77.00 - 80.72) 79.71 93.6/88.2/70.2(1)rateMortality0.34 (0.00 - 0.72) 0.005 0.43 (0.13 - 0.73) 0.006 0.005(2)(last month)Hospitalizations1.02 (0.36 - 1.68) 0.112 0.81 (0.40 - 1.22) 0.133 0.044(2)(last month)Cases2.03 (1.10 - 2.96) 0.118 1.30 (0.78 - 1.82) 0.037 1.458(24 hours)New cases2.71 (1.64 - 3.78) 0.118 1.30 (0.78 - 1.82) 0.023 1.116(7 days)Cronbach’s alpha 0.83 0.95Table 2: COVID-19 incidence metrics in % obtained from indirect survey data for China.SamplesCases Vaccination Mortality Hosp Cases Cases(last month) rate (last month) (last month) (24 hours) (7 days)China 46978.57 91.03 1.19 9.30 2.87 9.52(77.62-79.54) (90.36-91.70) (0.94-1.45) (8.61-9.97) (2.48-3.26) (8.83-10.21)ProvincesJiangsu 4875.56 87.92 1.67 7.64 2.64 9.44(72.42-78.69) (85.54-90.30) (0.73 - 2.60) (5.70-9.58) (1.47-3.81) (7.31-11.58)Guangdong 4580.00 86.07 0.59 5.33 3.26 6.96(76.98-83.02) (83.46-88.69) (0.01-1.17) (3.64-7.03) (1.92-4.60) (5.04-8.88)Shandong 2774.81 95.80 1.48 8.40 2.22 6.67(70.59 - 79.04) (93.85-97.76) (0.30-2.66) (5.69-11.10) (0.79-3.66) (4.24-9.10)CitiesShanghai 968.89 88.15 2.22 5.93 0.74 5.19(61.08-76.70) (82.70-93.60) (0.00-4.71) (1.94-9.91) (0.00-2.19) (1.44-8.93)Guangzhou 1181.82 86.67 1.82 9.70 4.85 7.27(75.93-87.70) (81.48-91.85) (0.00-3.86) (5.18-14.21) (1.57-8.13) (3.31-11.24)Chengdu 889.17 88.33 0.83 8.33 0.83 8.33(83.61-94.73) (82.59-94.08) (0.00-2.46) (3.39-13.28) (0.79-2.45) (3.39-13.28)Beijing 874.17 91.67 0.83 13.33 5.00 11.67(66.33-82.00) (86.72-96.61) (0.00-2.45) (7.25-19.42) (1.10-8.90) (5.92-17.41)reliable and use them considering a prudential bias when assessingthem.Table 2 shows the estimated results for China for all the questionsof the survey. 
While 1,000 indirect survey responses were collected, applying the filters specified in Section 2.2 drastically reduced the sample size to 469. Comparing our results with the OWID data for China, the vaccination rate is 91.9% while we estimate 91.03% (90.36%-91.7%), which is almost a perfect match. The number of deaths reported by OWID is 0.005% while we estimate 1.19% (0.94%-1.45%), a much higher value. However, OWID warns that “the number of confirmed deaths may not accurately represent the true number of deaths”. Therefore, our estimate could serve as a first approximation (that may be biased). Our estimate of the number of cases in the last month is 78.57% (77.62%-79.54%), very far from the 6.182% reported by OWID (which warns that “the number of confirmed cases is lower than the true number of infections”). Note that some areas of China may have a high incidence, as noted in the report published at [16]: “nearly 90% of Henan’s population had been infected by 6 January”.

We compute estimates for the provinces and cities with the largest number of samples (see Table 2). The rate of vaccination and of cases in the last month is similar in all of them, and similar to the values for China as a whole. The Guangdong province shows the lowest estimates of hospitalizations and deaths, while it has large case estimates among provinces. Among cities, Beijing shows low estimates of monthly cases, but large rates of recent cases and hospitalizations. Unfortunately, the sample size for cities is very small. Finally, we would like to point out that, in general, the sample is relatively small compared to the size of the country. Additionally, as can be seen in Table 3 in the Supplementary Information, the sample is biased by age and education level. These biases are reduced with the use of indirect questions, but more studies are still needed.

4 CONCLUSIONS AND FUTURE WORK

This work aims to estimate a snapshot of COVID-19 incidence, hospitalizations, and mortality from indirect surveys in China in January 2023.
To estimate these epidemic indicators, we used a modified version of the NSUM technique that fixes the number of people in the contact network. In addition, a data pre-processing stage is included to extract a reliable set of survey samples. In future work, we are interested in analyzing multiple data preprocessing techniques to minimize the number of discarded samples and maximize the knowledge extracted from indirect surveys. Additional results and a more extended discussion can be found in the full version of the article [11].

Juan Marcos Ramírez, Sergio Díaz-Aranda, Jose Aguilar, Antonio Fernández Anta and Oluwasegun Ojo, Rosa Elvira Lillo

5 RESEARCH ETHICS APPROVAL

To carry out this study, a request was previously submitted to the ethics committee of the IMDEA Networks Institute, which approved it in the last quarter of 2022. The ethics committee approved carrying out the study on the condition that the anonymity of the respondents is preserved. Moreover, the platform used to collect the survey responses guarantees that the participants (who belong to that platform) give their consent to take part in the surveys.

6 CONFLICT OF INTEREST DISCLOSURES

None reported.

7 FUNDING/SUPPORT

This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union “NextGenerationEU”/PRTR.

8 DATA SHARING STATEMENT

The data collected in the indirect surveys is publicly available at https://github.com/GCGImdea/coronasurveys/tree/master/papers/2023-COVID-19-China-January.

9 ACKNOWLEDGMENT

We want to thank Lin Wang for his help with the Chinese version of the survey.
O4i2dKYWGf
The paper proposes an indirect survey method and the Network Scale-up Method (NSUM) to estimate COVID-19 indicators. While the methodology and data preprocessing are explained well, the lack of detailed discussion on NSUM and omission of related works utilizing NSUM raise concerns about the novelty and completeness of the research.
2: Marginally below acceptance threshold
The paper's introduction explains the challenge of obtaining reliable COVID-19 data, given the need for target data and costly tests. To address this issue, the authors suggest utilizing an indirect survey method with a fixed number of people in the contact network. They also propose the NSUM method to estimate various epidemic indicators. However, I have noticed that the NSUM method for constructing a contact network in COVID-19 cases has been utilized before but is not mentioned in the related works. The methodology used in the paper is reliable. The survey gathered data from 15 contacts of each of 1,000 participants concerning various COVID-19 indicators. However, the paper did not specify how the participants were selected or whether their contacts were mutually exclusive. For instance, if the participants were hospital staff, their contacts would differ from those who work remotely. The paper clearly explained the data preprocessing, utilizing inconsistency filters and univariate outlier detection to eliminate anomalies while accounting for skewed data. Nevertheless, the NSUM method was not extensively discussed. In the methodology section, the researchers assessed their findings in the UK and Australia by contrasting them with the official results on the OWID platform. Although the mortality rate differed significantly, Cronbach’s alpha score was high, indicating strong internal consistency. Notably, the vaccination rate result closely matched the actual value. In my opinion, the sample size should have been larger, as the filters significantly decreased the number of sample surveys available. The paper is well-organized, with each section thoroughly explained. However, I noticed that the methodology of NSUM is missing, which plays a vital role in constructing the network graph and producing the results. I also came across some related works that utilize NSUM from survey data in COVID-19 cases, which were not included in the relevant work section.
Although the problem they were addressing is undoubtedly significant, I remain somewhat skeptical about the novelty of this work. This paper does not heavily rely on mathematical formulas or algorithms, making it less technical in nature. However, the equations presented in the paper seem to be accurate to my understanding. To summarize, the paper is well-written and presents a clear problem definition. However, the methodology could use further development, and there is a lack of reference to recent work on the same problem.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
qkDCSV-RMt
KDD.org/2023/Workshop/epiDAMIK
2023
Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Events
["MONIKA RAY", "Joshua J. Fenton", "Patrick Romano"]
National opioid prescribing guidelines and related quality measures have stimulated changes in opioid prescribing. Studies have shown that rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However, we do not know enough about the trajectories of dose tapering implemented in clinical practice, and how heterogeneous populations of patients respond to different treatments. Our aim was to examine prescribed opioid doses in a large, longitudinal, clinically diverse, national population of opioid-dependent patients with either Medicare or commercial insurance. We performed phenotype clustering to identify unsuspected, novel patterns in the data. In a longitudinal cohort (2008-2018) of 113,618 patients from the OptumLabs Data Warehouse with 12 consecutive months at a high, stable mean opioid dose ($\geq$50 morphine milligram equivalents), we identified 30,932 patients with one dose tapering phase that began at the first 60-day period with $\geq$15\% reduction in average daily dose across overlapping 60-day windows through seven months of follow-up. We applied spectral clustering as we preferred an assumption-free approach with no apriori information being imposed. Spectral clustering identified several cluster-cohorts, with three that included over 98\% of the sample. These three clusters were similar in baseline characteristics, but differed markedly in the magnitude, velocity, duration, and endpoint of tapering. The cluster-cohort characterised by moderately rapid, steady tapering, most often to an end opioid dose of zero, had excess drug-related events, mental health events, and deaths, compared with a cluster characterised by very slow, steady tapering with long-term opioid maintenance. Moderately rapid tapering to discontinuation may be associated with higher risk than slow tapering with longer-term maintenance of opioid analgesia. 
Furthermore, several clusters highlighted a cohort that had complete taper reversals, indicating treatment failure, as the tapering was not maintained. Our findings suggest that identifying subtle yet clinically meaningful patterns in opioid prescribing data, such as patterns within the dose trajectories, can highlight the distinct characteristics separating subpopulations.
["high dose opioids", "spectral clustering", "patient subpopulations", "personalised medicine", "healthcare", "opioid crisis", "phenotype clustering"]
ABSTRACTNational opioid prescribing guidelines and related quality measureshave stimulated changes in opioid prescribing. Studies have shownthat rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However,there isn’t enough research on trajectories of dose tapering imple-mented in clinical practice, and how heterogeneous populations ofpatients respond to different treatments. Our aim was to examineprescribed opioid doses in a large, longitudinal, clinically diverse,national population of opioid-dependent patients with either Medi-care or commercial insurance. We performed phenotype clusteringto identify unsuspected, novel patterns in the data. In a longitu-dinal cohort (2008-2018) of 113,618 patients from the OptumLabsData Warehouse with 12 consecutive months at a high, stable meanopioid dose (≥50 morphine milligram equivalents), we identified30,932 patients with one dose tapering phase that began at the first60-day period with ≥15% reduction in average daily dose acrossoverlapping 60-day windows through seven months of follow-up.We applied spectral clustering as we preferred an assumption-freeapproach with no apriori information being imposed. Spectral clus-tering identified several cluster-cohorts, with three that includedover 98% of the sample. These three clusters were similar in baselinecharacteristics, but differed markedly in the magnitude, velocity, du-ration, and endpoint of tapering. The cluster-cohort characterisedby moderately rapid, steady tapering, most often to an end opioiddose of zero, had excess drug-related events, mental health events,and deaths, compared with a cluster characterised by very slow,steady tapering with long-term opioid maintenance. Moderatelyrapid tapering to discontinuation may be associated with higherrisk than slow tapering with longer-term maintenance of opioidanalgesia. 
Furthermore, several clusters highlighted a cohort thathad complete taper reversals indicating a treatment failure as thetapering was not maintained. Our findings suggest that identify-ing subtle yet clinically meaningful patterns in opioid prescribingdata, such as patterns within the dose trajectories, can highlightthe distinct characteristics separating subpopulations.Permission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).CCS CONCEPTS•Applied computing →Health informatics ;Physical sciencesand engineering .KEYWORDShigh dose opioids, spectral clustering, patient subpopulations, phe-notype clustering, opioid crisisACM Reference Format:Monika Ray, Joshua J. Fenton, and Patrick S. Romano. 2023. Spectral Clus-tering Identifies High-risk Opioid Tapering Trajectories Associated with Ad-verse Events. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD InternationalWorkshop on Epidemiology meets Data Mining and Knowledge Discovery,August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 9 pages.1 INTRODUCTIONNational prescribing guidelines by the Centers for Disease Controland Prevention (CDC) and the current opioid overdose crisis haveled to substantial dose tapering among patients on long-term opioidtherapy for chronic pain, especially since 2016 [ 10,16,30]. A qualitymetric endorsed by the National Quality Forum (NQF) encouragesprescribers to reduce opioid doses below 90 morphine milligramequivalents (MME) per day [ 33]. 
In the setting of long-term opioid therapy for chronic pain, several studies have shown worse outcomes associated with rapid dose reduction [1, 13, 17, 41], and dose tapering has emerged as a complex issue for both physicians and patients. To better inform evidence-based clinical practices, health system policies, and public programmes, it is necessary to characterise population heterogeneity (phenotype clustering) and to understand which patients are appropriate candidates for different tapering approaches. This type of research requires a better understanding of the variety of tapering trajectories that clinicians implement in diverse populations to enable comparisons of the risks and benefits of alternative approaches in relevant subpopulations. Large healthcare data warehouses that accumulate longitudinal records from multiple sources offer great opportunities for improved understanding of population heterogeneity in opioid dose management. To undertake this research, we used retrospective data from the OptumLabs Data Warehouse (OLDW), which includes longitudinal health information for over 109 million commercial enrollees and 12.5 million Medicare Advantage enrollees. We leveraged the retrospective cohort previously created by Agnoli and colleagues [1], whose prior research suggested that the peak tapering velocity has a significant mean effect on adverse outcomes. However, opioid-dependent patients with chronic pain often resist any dose reduction, while pharmacies and regulators encourage dose reduction for every eligible patient. To inform better clinical practice and policies, we need to understand how the peak tapering velocity fits into overall patterns of opioid dose management over time, and then explore the characteristics of higher- and lower-risk subpopulations of patients undergoing dose tapering.
For this purpose, we used spectral clustering to describe clinically meaningful subpopulations. Specifically, we wanted to examine similarities among patients within a cluster and differences among patients across clusters. Spectral clustering has been applied to speech processing, computer vision and exploratory data mining in biology [3, 6, 11, 21, 38, 42], but opioid dosing is a novel and highly topical application in the current era of increasing opioid-related overdose death rates [15]. This work deviates from the popular hypothesis-driven approaches where the functional form of the models is independent predictors and dependent outcomes. In this data-driven approach the aim is to first cluster phenotypes, without classifying features as independent or dependent variables, and then identify meaningful signatures within these clusters [25]. These signatures can then be used in predictive models as either predictors or outcomes. The main purpose of phenotype clustering is to uncover hidden patterns. The primary focus of our exploratory work is to see (1) how the patients cluster based on their phenotypes (grouping patterns or phenotypes) and (2) whether these clusters have any remarkable differences (i.e., to identify signatures that can be used in predictive analytics).

1.1 Data Cohort and Adverse Events

We obtained data from 2008-2018 for adults from the OptumLabs Data Warehouse (OLDW), which contains de-identified administrative claims data, including medical and pharmacy claims and eligibility information for commercial and Medicare Advantage enrollees, representing a mixture of ages and regions across the United States. The entire cohort, which we received from Agnoli and colleagues [1], had a stable baseline period of 12 consecutive months at a high opioid dose ≥50 MME, resulting in 113,618 patients.
The tapered cohort was defined as the subset of patients who had a dose tapering phase, which began on the first 60-day period with ≥15% reduction in average daily dose across overlapping 60-day windows through the initial seven months of follow-up. Patients who had ≥15% reduction in average daily dose over a longer time frame were not included due to uncertainty about the intent of slight MME dose reductions (which could be driven by delays in picking up prescriptions). To facilitate interpretation, we selected a population of patients who had only one period of tapering. Mortality in the tapered cohort was determined by analysing the time after taper initiation and matching against the records in the OLDW mortality table.

Adverse events included emergency department (ED) visits or hospitalisations for (1) drug or alcohol overdose or withdrawal (drug-related events); and (2) depression, anxiety, or suicide attempts (mental health events). Drug-related and mental health events were identified using International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes for claims from October 2015 through 2019 and ICD-9-CM diagnosis codes for claims from 2008 through September 2015. Comorbidities were identified for all patients using the available software (AHRQ "Elixhauser" Comorbidity Software) in the OLDW [12, 29]. This project was determined by the University of California Office of the President to be exempt from human subjects review, as the OLDW uses completely de-identified, anonymised data.

1.2 Analytic Methods

We considered several methods to identify subpopulations and their characteristics, such as K-means clustering and latent class analysis (LCA). K-means clustering is a popular clustering algorithm, but it is based on many restrictive assumptions, which most real-world datasets violate [20, 35]. The algorithm operates on the input data matrix and, hence, is sensitive to the size of the data (N) as well as the number of features.
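One plausible reading of the taper-onset rule above (the first 60-day window whose average daily dose falls ≥15% below the stable baseline mean, scanned over overlapping windows) can be sketched in Python. The function name, the day-level dose series, and the comparison against the baseline mean are our assumptions, not the authors' code:

```python
import numpy as np

def find_taper_start(daily_mme, baseline_mean, window=60, drop=0.15):
    """Return the index of the first day whose `window`-day mean dose
    falls at least `drop` below the stable baseline mean, else None."""
    doses = np.asarray(daily_mme, dtype=float)
    threshold = (1.0 - drop) * baseline_mean
    # Overlapping windows: slide one day at a time.
    for start in range(0, doses.size - window + 1):
        if doses[start:start + window].mean() <= threshold:
            return start
    return None
```

For a patient stable at 100 MME/day who drops to 80 MME/day on day 60, the first 60-day window whose mean reaches the 85 MME threshold starts on day 45, so that day would be flagged as the taper start under this reading.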
LCA [23, 43], a type of finite mixture model, may be suitable for describing dose trajectories, but it requires an outcome to be specified. By comparison, spectral clustering is purely unsupervised and does not require outcome variables. For our analyses, we used a novel spectral clustering algorithm (Spectrum) developed by John and colleagues [21]. Spectral graph theory associates the spectrum of a matrix, i.e. the eigenvalues of a matrix, to the properties of a graph via the Laplacian matrix [7, 8, 37]. It operates on graphs that are constructed between neighbouring nodes that represent data points (i.e., patients). It identifies arbitrarily shaped clusters (with convex or non-convex boundaries) using the eigenvectors of the Laplacian similarity matrix [7, 9, 26, 46]. A Laplacian similarity matrix models the local neighborhood relationships between data points as an undirected graph [4, 37, 40]. Spectral clustering is robust to the geometry of the clusters and to outliers, and does not require the user to specify the number of clusters [2, 24, 46]. It identifies the number of clusters by computing the differences between the consecutive ordered eigenvalues of the graph Laplacian and identifying the first pair of consecutive eigenvalues with the maximum difference in their values. The steps of spectral clustering include (1) creation of the similarity matrix, then (2) creation of the Laplacian matrix, and finally (3) creation of clusters [32, 44]. Variations of spectral clustering algorithms address issues related to the creation of the similarity matrix, graph partitioning, and speed on massive datasets. Since spectral clustering operates on the Laplacian similarity matrix, which is an N×N matrix of N data points, it is sensitive to the size of the data.
The Spectrum algorithm developed by John et al. is novel in the way it combines the following features: (1) combined Zelnik-Manor self-tuning [49] and Zhang density-aware [50] kernels to create the similarity matrix; (2) the Ng spectral clustering method to estimate the optimal number of clusters [31], and Gaussian mixture modelling (GMM) [47] to finally cluster the data; and (3) a fast approximate spectral clustering (FASP) method [48] to allow for fast clustering of massive data on regular desktop machines. The self-tuning component of the kernel adjusts to the scale of the data, while the density-aware component adapts to the local density of the data, creating more or fewer connections depending on the density of the regions. Spectrum uses the diffusion of tensor product graphs (TPG) to capture higher-order information in the data and highlight underlying patterns in the data [39]. The final clusters are plotted using the first two principal components, PC1 and PC2. We did not use the eigen gap-statistic to determine the number of clusters as it was not essential for us to constrain the number of clusters, nor were we against identifying small cohorts if a cohort had important patterns to investigate further. In our work, we were searching for anomalies or ‘interesting patterns’ that could explain the underlying population heterogeneity.
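The generic pipeline described above (similarity matrix → graph Laplacian → eigengap estimate of the number of clusters → clustering in the spectral embedding) can be illustrated with a numpy-only sketch. This is a simplified stand-in, not the Spectrum algorithm: it uses a plain RBF kernel rather than Spectrum's self-tuning, density-aware kernel, and deterministic k-means rather than GMM for the final assignment:

```python
import numpy as np

def spectral_cluster(X, sigma=1.0, max_k=5):
    """Simplified spectral clustering: RBF affinity, normalized graph
    Laplacian, eigengap heuristic for k, then k-means on the embedding."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))          # RBF similarity matrix
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    L = np.eye(n) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
    gaps = np.diff(vals[:max_k + 1])              # eigengap heuristic: k = position of
    k = int(np.argmax(gaps)) + 1                  # the largest gap among the smallest eigenvalues
    E = vecs[:, :k]
    E = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-12)
    centers = [E[0]]                              # deterministic farthest-point seeding
    while len(centers) < k:
        dist = np.min([((E - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(E[int(np.argmax(dist))])
    centers = np.array(centers)
    for _ in range(50):                           # plain k-means on the embedding rows
        labels = ((E[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([E[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return k, labels
```

On two well-separated point clouds this recovers k = 2 from the eigengap and assigns each cloud to its own cluster; on noisy, overlapping clusters like the patient data discussed here, the eigengap is much less decisive, which is the limitation the authors note.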
The eigen gap heuristic works well if there are well-defined clusters but is not of much help when there are noisy or overlapping clusters, which is likely to be the case in this data. The variables in the input space of the spectral clustering algorithm were age, gender, monthly average opioid dose (MME), mean baseline dose, count of drug-related events in the pre-taper and after-tapering-initiation phases, the number of mental health events in the pre-taper and after-tapering-initiation phases, benzodiazepines co-prescription at baseline and at 30 days, 31 Elixhauser comorbidity flags, and the change in dose across consecutive months for 12 months. The numbers of drug-related and mental health events were identified for each patient before taper and after taper initiation, as these were the adverse events of interest. We reviewed each cluster to identify the prevalence of different adverse events as well as the number of deaths after taper initiation. We report the distinguishing characteristics across the cluster subpopulations. For counterfactual inference, we identified the number and proportion of drug-related and mental health events in each cluster, and then computed the excess number of those events relative to the null assumption of equal event risk across all clusters.
The counterfactual calculation for each adverse event is given by:

ExcessEvents = NumEventsCluster − NumPatientsCluster × (TotalEvents / TotalPatients),

where, for each adverse event (i.e., mortality, drug-related events or mental health events), ExcessEvents is the number of excess events in the cluster, NumEventsCluster is the number of observed events within the cluster, NumPatientsCluster is the number of patients in the cluster, TotalEvents is the total number of adverse events in the entire data and TotalPatients is the total number of patients in the analysis.

2 RESULTS

Among the 113,618 patients in the entire cohort, 33,628 had one or more phases of opioid dose tapering (29.5%) based on the tapering definition of ≥15% reduction in average daily dose in 7 months of follow-up [1]. Fig. 1 shows the analytical pipeline and the resultant plot of the 10 clusters identified. We could not show all ten clusters clearly in a 2-D plot. Since spectral clustering plots the clusters by collapsing them onto the first two principal components, the multi-dimensional aspect of the clusters is not visible. However, Fig. 1 shows that the clusters are not spherical and the data has outliers. Table 1 shows the characteristics of patients who tapered; the sample was 54% female and 92% had only one tapering period available for analysis.

Spectral clustering of 30,932 patients who underwent single tapers resulted in 10 clusters (groups of patients or subpopulations) with relatively similar baseline characteristics. All clusters had patients with high mean baseline doses of 140-237 MME/day. Of particular interest were the three large clusters and their baseline characteristics shown in Table 2. The other seven clusters’ characteristics are discussed below but not shown due to small cell size policy. The three large clusters (1, 2, and 10) were very similar demographically, with mean ages of 58.7, 57.0, and 58.4 years, and 56%, 53%, and 50% female composition, respectively.
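The excess-event counterfactual above is a one-line computation; a minimal sketch with illustrative numbers (not the paper's data):

```python
def excess_events(cluster_events, cluster_patients, total_events, total_patients):
    """Observed events in a cluster minus the events expected if all
    clusters shared the overall event rate (the null assumption)."""
    expected = cluster_patients * (total_events / total_patients)
    return cluster_events - expected

# Illustrative only: a 1,000-patient cluster with 116 events, in a
# population of 2,000 patients with 100 events overall.
print(excess_events(116, 1000, 100, 2000))  # 116 - 1000 * 0.05 = 66.0
```

A negative result means the cluster had fewer events than its share of patients would predict, as in cluster 1 below; a positive result means an excess, as in clusters 2 and 10.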
They were also similar on baseline co-prescribing of benzodiazepines (29%, 30%, and 30%, respectively) and comorbid diagnoses during the baseline year, such as alcohol abuse and dependence (2%, 3%, and 2%, respectively), drug abuse and dependence (17%, 17%, and 15%, respectively), and depression (32%, 31%, and 30%, respectively). Furthermore, they had similar medical experiences during their pre-taper period of stable opioid dosing, with relatively few drug-related events (mean 0.042, 0.053, and 0.043, respectively) and more mental health events (mean 3.81, 4.03, and 3.66, respectively).

Fig. 2 compares the tapering trajectories across clusters. Each trajectory is plotted as the average monthly dose of the patients in the cluster. The three largest clusters had markedly different opioid dose tapering trajectories and associated adverse events as shown in Table 3. The number of excess events represents the difference between the number of observed events and the number of events that would have occurred if all the clusters had the same event rate. About 55% of patients were in cluster 1, characterised by very slow and steady tapering to a final dose about two-thirds of baseline, with low event rates and no reversal to pre-taper baseline dose. While clusters 2 and 10 looked quite similar in their baseline characteristics, they had very different taper trajectories. Cluster 2 was characterised by relatively rapid tapering to zero or very low doses, while cluster 10 was characterised by somewhat slower tapering from lower baseline doses to higher end doses. Both these clusters had slightly higher event rates than other clusters. Clusters 2 and 10 also had more drug-related events than cluster 1 (mean 0.116 and 0.128 versus 0.074), more mental health events (mean 0.089 and 0.075 versus 0.058), and more deaths (mean 0.079 and 0.098 versus 0.036) during the tapering year.
However, compared to cluster 10, cluster 2 had higher baseline mean and median doses (192.3 and 137.0 MME versus 140.3 and 104.0 MME), and a lower mean end dose (12.9 versus 37.6 MME). The slow trajectory for cluster 1, and the very low or zero doses in clusters 2 and 10, continued into the 15th month, although those months were not included in the spectral clustering analyses.

The characteristics of the taper trajectories for all the clusters are detailed in Table 4. The left panel in Fig. 3 shows the proportion of patients with a 0 MME dose of opioids across the three clusters each month, while the right panel shows the taper trajectory. Table 5 shows the relative change in the proportion of patients who were prescribed 0 MME opioids at each time point in the three clusters. Cluster 2 had the highest proportion of patients (73%) who were completely tapered off opioids at the end of 12 months, compared to cluster 10 (66%) and cluster 1 (2%). Since cluster 1 demonstrated the safest outcomes, we compared clusters 2 and 10 to cluster 1. The graph in the left panel in Fig. 3 shows that cluster 2 had a steep yet steady upward trend in the proportion of patients who were taken off opioids, whereas patients in cluster 1 almost uniformly stayed on opioids, and cluster 10 demonstrated a pattern of delayed discontinuation.

Figure 1: Analysis Flowchart

Table 1: Characteristics of the patients who tapered

Variable | Category | n
Gender | Female | 18,197
Gender | Male | 15,431
Age | Mean±Std. | 58.0±11.6
Number of Tapers | 1 | 30,932
Number of Tapers | 2 | 2,462
Number of Tapers | >=3 | 234
Drug-related events before tapering | 0 | 32,238
Drug-related events before tapering | 1 | 1,182
Drug-related events before tapering | >=2 | 208
Drug-related events after tapering | 0 | 31,210
Drug-related events after tapering | 1 | 1,888
Drug-related events after tapering | 2 | 356
Drug-related events after tapering | >=3 | 174
Mental health events before tapering | 0 | 14,788
Mental health events before tapering | 1 | 3,984
Mental health events before tapering | 2 | 2,949
Mental health events before tapering | 3 | 2,040
Mental health events before tapering | 4 | 1,665
Mental health events before tapering | 5 | 1,223
Mental health events before tapering | 6 | 1,034
Mental health events before tapering | >=7 | 5,945
Mental health events after tapering | 0 | 32,041
Mental health events after tapering | 1 | 1,096
Mental health events after tapering | 2 | 300
Mental health events after tapering | >=3 | 191

Table 2: Characteristics of Clusters 1, 2 and 10 in the pre-taper period

Cluster | No. patients | Age (Mean) | Female (%) | Benzodiazepines Rx (%) | Alcohol abuse (%) | Depression (%) | Drug abuse (%) | Drug-related event counts (Mean) | Mental health event counts (Mean) | Base dose (Mean MME)
1 | 16,965 | 58.74 | 55.7 | 28.9 | 2.4 | 31.7 | 16.6 | 0.04 | 3.81 | 189.82
2 | 13,025 | 56.96 | 53.1 | 30.1 | 3.0 | 31.4 | 16.5 | 0.05 | 4.03 | 192.31
10 | 531 | 58.36 | 49.5 | 29.7 | 3.4 | 30.3 | 15.1 | 0.04 | 3.66 | 140.33

Table 3: Adverse events after taper initiation in clusters 1, 2 and 10

Cluster | No. patients (%) | Drug-related events/1000 | No. excess drug-related events | Mental health events/1000 | No. excess mental health events | Deaths/1000 | No. excess deaths
1 | 16,965 (55%) | 74.0 | -320.2 | 58.4 | -240.2 | 36.1 | -329.8
2 | 13,025 (42%) | 116.2 | 303.6 | 89.4 | 220.5 | 79.1 | 306.2
10 | 531 (<2%) | 128.1 | 18.7 | 75.3 | 1.5 | 97.9 | 22.5

Table 4: Average monthly dose for 12 months from taper initiation - Taper Trajectories

Cluster | BaseDose | Mon1 | Mon2 | Mon3 | Mon4 | Mon5 | Mon6 | Mon7 | Mon8 | Mon9 | Mon10 | Mon11 | Mon12 | Taper Trajectory
1 | 189.82 | 174.53 | 170.27 | 165.64 | 161.23 | 157.28 | 154.15 | 155.05 | 155.53 | 155.25 | 154.05 | 151.68 | 144.01 | Very slow, no reversal
2 | 192.31 | 175.19 | 157.04 | 139.42 | 119.01 | 96.06 | 75.19 | 59.71 | 45.49 | 33.53 | 23.35 | 15.18 | 12.90 | Rapid, no reversal
3 | 236.81 | 213.18 | 121.69 | 1.38 | 193.46 | 204.26 | 206.02 | 191.60 | 163.58 | 150.98 | 141.49 | 129.90 | 114.59 | Very rapid, complete reversal
4 | 192.57 | 179.16 | 0.44 | 185.31 | 194.26 | 194.64 | 176.29 | 167.38 | 160.98 | 150.52 | 143.25 | 134.76 | 133.31 | Very rapid, complete reversal
5 | 196.99 | 183.05 | 147.09 | 92.71 | 0.33 | 172.22 | 176.60 | 158.29 | 145.41 | 139.10 | 135.23 | 119.75 | 113.12 | Very rapid, complete reversal
6 | 212.81 | 205.10 | 182.34 | 153.96 | 106.37 | 77.02 | 5.26 | 0.00 | 168.49 | 169.27 | 152.98 | 120.84 | 115.09 | Very rapid, complete reversal
7 | 227.55 | 217.24 | 171.99 | 152.88 | 122.05 | 101.76 | 57.73 | 31.72 | 22.56 | 0.00 | 148.42 | 147.73 | 135.03 | Rapid, partial reversal
8 | 217.07 | 205.71 | 177.62 | 161.43 | 145.93 | 102.60 | 78.04 | 64.87 | 51.06 | 33.13 | 0.00 | 157.58 | 166.52 | Rapid, partial reversal
9 | 220.37 | 203.30 | 160.72 | 117.39 | 85.31 | 63.20 | 59.18 | 48.60 | 36.30 | 29.20 | 18.94 | 0.00 | 143.26 | Rapid, partial reversal
10 | 140.33 | 124.30 | 114.04 | 111.72 | 109.34 | 101.91 | 92.57 | 85.40 | 80.46 | 100.04 | 101.61 | 81.17 | 37.57 | Erratic, no reversal

Figure 2: The average monthly dose in MME for all the patients within each cluster.

The remaining 1.3% of patients sorted into seven smaller clusters, all of which had patients who were tapered to or close to 0 MME (not shown due to small cell size policy). In clusters 3, 4, and 5, dose tapering to near zero occurred very rapidly within 4 months after initiation, but the pre-taper dose was quickly restored and slow tapering was initiated instead.
On the other hand, in clusters 6, 7, 8,and 9, rapid tapering occurred over a longer period of 6-11 months,but the taper was largely reversed and the subsequent trajectorywas truncated due to the cohort design. Drug-related event ratesand mental health event rates were quite variable across these smallclusters (data not shown), but in aggregate, the mental health eventrate of patients in these seven clusters was over twice that of cluster1 (mean 0.117 versus 0.058).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Monika Ray, Joshua J. Fenton, and Patrick S. RomanoFigure 3: The proportion of patients without opioids, i.e., with an average monthly dose of 0 MME, in the three clusters ofinterest and their corresponding tapering trajectories.Table 5: Relative change in the proportion of patients who were prescribed 0 MME opioids by monthMonth C1 Prop. C1 Relative C2 Prop. C2 Relative Diff.Relative C10 Prop. C10 Relative Diff. RelativePatients change Patients change changes C1 - C2 Patients change changes C1 - C102nd 0.007 0.058 0.0243rd 0.010 0.046 0.112 0.95 -0.49 0.038 0.54 -0.084th 0.013 -0.99 0.187 0.66 -1.65 0.056 0.50 -1.495th 0.015 0.13 0.287 0.54 -0.41 0.090 0.60 -0.476th 0.016 -0.98 0.378 0.32 -1.30 0.109 0.21 -1.197th 0.009 -0.46 0.454 0.20 -0.66 0.154 0.41 -0.878th 0.010 -0.99 0.530 0.17 -1.16 0.196 0.27 -1.269th 0.008 -0.21 0.597 0.13 -0.34 0.102 -0.48 0.2710th 0.008 -0.99 0.659 0.10 -1.10 0.098 -0.04 -0.9511th 0.007 -0.15 0.707 0.07 -0.22 0.358 2.65 -2.8012th 0.024 -0.98 0.733 0.04 -1.01 0.663 0.85 -1.83Relative change refers to the difference in the proportion of patients within the cluster between the current and the previous month.Negative value indicates that fewer patients were prescribed 0 MME opioid in the current month compared to the previous month. 
C1 - Cluster 1; C2 - Cluster 2; C10 - Cluster 10.

3 DISCUSSION
In this large longitudinal cohort of patients with chronic pain receiving high dose opioids at stable dosing for at least one year, spectral clustering analysis suggested wide variability in dose tapering patterns over the first year of tapering. These trajectories show notable variation in the velocity and duration of tapering, post-tapering minimum doses, and subsequent re-initiation (taper reversal) of moderate-to-high opioid doses, which was an unexpected finding. While the specific number of clusters is not important, the cohorts identified were interesting and are discussed here. The largest cluster (cluster 1, with 55% of patients) was characterised by very slow, gradual tapering from a mean baseline dose of 190 MME to 144 MME at 12 months, whereas the second largest cluster (cluster 2, with 42% of patients) was characterised by quicker, steeper tapering from a mean baseline dose of 192 MME to only 12.9 MME (with 73% of patients discontinued). The latter cluster, unlike other clusters, had a substantial excess of both drug-related and mental health events after the initiation of tapering, suggesting that tapering patients accustomed to high-dose prescription opioids to zero may be associated with important health risks. Our results suggest that there is a significant subpopulation of patients receiving high-dose opioids for chronic pain who may not tolerate tapering to very low doses. Many of these patients may have had opioid use disorders; previous research in the OLDW has shown that such patients have better outcomes if treated with buprenorphine or methadone [45]. There was no strong rationale to specify the number of clusters, as we were looking for 'interesting patterns' that could seem like outliers compared to the rest of the data.
Notably, spectral clustering identified previously unsuspected and unusual patterns in the opioid dose management data. In particular, two small clusters were characterised by rapid tapering to negligible or zero doses, followed by re-initiation of prescription opioids at moderately high doses. These patterns merit further exploration, as they strongly suggest that reversal of tapering may be a marker of an unsuccessful tapering strategy and that clinicians can safely resume prior opioid doses for some of these patients. These patients with unsuccessful tapers need to be separated and studied alongside the group of successful tapers, rather than combined as was done when this cohort was selected for analysis (see the Data Cohort and Adverse Events section). This suggests that the definition of a tapered cohort needs to be revisited and that taper reversals be counted as an adverse event. Our findings highlight the importance of considering the velocity of tapering, as suggested by Agnoli and colleagues' research, along with the taper duration and post-tapering final dose, as clinicians attempt to devise safer dose tapering strategies to address the current opioid overdose epidemic in the US. Unsupervised data mining methods are powerful tools when the aim is to understand the data better and see what may have been previously missed in hypothesis-driven studies. Lastly, unsupervised knowledge discovery research helps in extracting novel, unsuspected phenomena that can be investigated using supervised methods. These methods may also challenge what was previously thought to be true; for example, by identifying the previously unrecognised patterns of tapering reversal shown in Fig. 2. During the writing of this manuscript, another report was published that analysed trajectories in patients receiving long-term opioid therapy using group-based trajectory modeling (GBTM) [5]. Binswanger's analysis identified five trajectories.
From the clinical perspective, this is interesting but is an oversimplification, as it puts all tapering patients into two groups, one slightly decreasing (which they reassigned to the stable group) and one decreasing (which they compared with the stable group), but they did not clearly identify taper reversals, suggesting that all tapers are maintained over time. We selected our cohort based on whether they tapered at some point but did not filter to select those with decreasing trajectories based on different velocities. Hence, it is quite plausible to expect multiple groups. In addition to being fully exploratory, with no assumptions on what kind of trajectories to expect, our analysis focused on patients for whom a taper was pre-determined, in order to understand the different types and speeds of tapering. Therefore, our results support and facilitate future analyses comparing the outcomes of these different tapering approaches with the alternative of not tapering at all (a control group of non-tapers), which is a viable approach but was not represented in our sample. Another notable difference from Binswanger's work is that we did not assume any data properties, such as distributions or the number of anticipated clusters, to run spectral clustering, and our dataset is many times larger and representative of the entire population of the US. As we were searching for subtle differences in a population that consists of tapering patients, in order to receive an amplified signal, we needed a large cohort and methods that do not impose any assumptions on the input data or the results. This is exactly what knowledge discovery is, i.e., the scholar keeps an open mind about the kind of patterns/information that will emerge. Unlike Binswanger's report, we did not impose any restriction on the spectral clustering algorithm.
It was during the analysis of the clusters, undertaken to understand why the patients segregated as they did, that we noticed the pattern of the trajectories was the point of subtle difference, which we discuss here in detail. This is work in progress, as we will need to further analyse these patterns using parametric methods and also study other potential outcomes of such tapering patterns. For the purpose of knowledge discovery, we preferred an assumption-free approach, with no apriori information being imposed in any phase of the analysis. Furthermore, as we did not have any prior knowledge of the underlying distribution patterns in this cohort, GBTM could have led us to incorrect results [28]. GBTM relies heavily on prior information, which, in essence, is a different approach than ours, which was to identify patterns that automatically emerge and correlate with nuanced differences in an already tapering population. We acknowledge some limitations in our analyses, such as the unknown intent of the prescribing provider. For example, the physician's choice of a rapid or slow taper may be driven by unobserved characteristics of patients or their medical histories, which may independently contribute to the resulting outcomes. We were also unable to distinguish patient-supported tapering from physician-demanded tapering, or what may have triggered taper reversals. Finally, the current data do not capture illicit opioid use, sharing of opioids prescribed for other patients, or methadone administered in certified treatment programmes. Nevertheless, our study is relevant to the research and clinical communities grappling with the opioid crisis.
There is substantial interest in understanding factors contributing to the current epidemic of opioid-related overdose deaths [15], reflected in several recent economic analyses of physician prescribing patterns and opioid abuse [18, 22], statewide surveys and reports on prescribing practices and patient outcomes [14, 27, 34], and studies of physician prescribing patterns and outcomes [19, 36]. Previous studies of opioid dose tapering either used smaller, less nationally representative cohorts or relied on supervised analytic methods, where an outcome is always defined, to identify patient characteristics that are associated with adverse outcomes.

4 CONCLUSION
Our objective was knowledge discovery: to identify hidden, unsuspected patterns in claims data for patients with chronic pain. Since our analysis was performed using a large dataset that is representative of the population of the United States, these results are generalisable. The insights from this work will be used to extend this work and guide predictive analysis. Our study also highlights the need for more detailed investigations to identify which patient factors should be considered when suggesting a dose tapering regimen. Dose tapering to discontinuation may plausibly increase the risk of subsequent opioid overdose if these opioid-dependent patients seek alternative opioids from illicit sources or mix opioids with other sedating drugs such as benzodiazepines, thereby negating the purpose of dose tapering. We find these results, obtained using a data-driven approach, to be compelling enough to warrant further investigations into dose tapering patterns to inform future national prescribing policies and clinical practice.

ACKNOWLEDGMENTS
The authors extend their sincere gratitude to Guibo Xing, Elizabeth Magnan, Alicia Agnoli and Daniel Tancredi for data sharing, as well as members of the OptumLabs OLDW team for their valuable guidance.
277PStg6hh
Review of paper 1
3: Marginally above acceptance threshold
This paper aims to study tapering trajectories for patients on long-term opioid therapy. They use longitudinal health data from United HealthGroup and identify 33,620 patients who underwent dose tapering. They apply spectral clustering (a variant by John et al., 2019) to cluster the patients, using variables including their age, gender, monthly average opioid dose, mean baseline dose, tapering trajectory, and adverse events pre-tapering and after tapering initiation. They find 10 clusters and focus on the three largest ones, which exhibit different tapering trajectories and slightly different adverse outcomes, while looking mostly similar on baseline characteristics.

Strengths
+ The problem is an important one to understand, i.e., the risks and benefits of dose tapering
+ The dataset is strong and appropriate for the study - they are able to identify over 33,000 patients with dose tapering and longitudinal data
+ It is interesting that there are different tapering trajectories discovered

Weaknesses/suggestions
- The goal of the work seems to be 1) to identify common tapering trajectories, 2) to learn the relationship between those trajectories and adverse outcomes. It's not clear to me, then, why clustering on all the variables – the baseline characteristics, the dose trajectory, and adverse events – is the right method here. Instead, would it make more sense to do something like, only cluster on dose trajectory, in order to answer (1), and then to fit a model to say, controlling for baseline characteristics, what is the effect of this kind of trajectory on adverse events, in order to answer (2)?
- The comparison of adverse events across clusters is also difficult to interpret without confidence intervals (Table 3)
4: The reviewer is confident but not absolutely certain that the evaluation is correct
qkDCSV-RMt
KDD.org/2023/Workshop/epiDAMIK
2023
Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Events
["MONIKA RAY", "Joshua J. Fenton", "Patrick Romano"]
National opioid prescribing guidelines and related quality measures have stimulated changes in opioid prescribing. Studies have shown that rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However, we do not know enough about the trajectories of dose tapering implemented in clinical practice, and how heterogeneous populations of patients respond to different treatments. Our aim was to examine prescribed opioid doses in a large, longitudinal, clinically diverse, national population of opioid-dependent patients with either Medicare or commercial insurance. We performed phenotype clustering to identify unsuspected, novel patterns in the data. In a longitudinal cohort (2008-2018) of 113,618 patients from the OptumLabs Data Warehouse with 12 consecutive months at a high, stable mean opioid dose ($\geq$50 morphine milligram equivalents), we identified 30,932 patients with one dose tapering phase that began at the first 60-day period with $\geq$15\% reduction in average daily dose across overlapping 60-day windows through seven months of follow-up. We applied spectral clustering as we preferred an assumption-free approach with no apriori information being imposed. Spectral clustering identified several cluster-cohorts, with three that included over 98\% of the sample. These three clusters were similar in baseline characteristics, but differed markedly in the magnitude, velocity, duration, and endpoint of tapering. The cluster-cohort characterised by moderately rapid, steady tapering, most often to an end opioid dose of zero, had excess drug-related events, mental health events, and deaths, compared with a cluster characterised by very slow, steady tapering with long-term opioid maintenance. Moderately rapid tapering to discontinuation may be associated with higher risk than slow tapering with longer-term maintenance of opioid analgesia. 
Furthermore, several clusters highlighted a cohort that had complete taper reversals, indicating a treatment failure as the tapering was not maintained. Our findings suggest that identifying subtle yet clinically meaningful patterns in opioid prescribing data, such as patterns within the dose trajectories, can highlight the distinct characteristics separating subpopulations.
["high dose opioids", "spectral clustering", "patient subpopulations", "personalised medicine", "healthcare", "opioid crisis", "phenotype clustering"]
ABSTRACT
National opioid prescribing guidelines and related quality measures have stimulated changes in opioid prescribing. Studies have shown that rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However, there isn't enough research on trajectories of dose tapering implemented in clinical practice, and how heterogeneous populations of patients respond to different treatments. Our aim was to examine prescribed opioid doses in a large, longitudinal, clinically diverse, national population of opioid-dependent patients with either Medicare or commercial insurance. We performed phenotype clustering to identify unsuspected, novel patterns in the data. In a longitudinal cohort (2008-2018) of 113,618 patients from the OptumLabs Data Warehouse with 12 consecutive months at a high, stable mean opioid dose (≥50 morphine milligram equivalents), we identified 30,932 patients with one dose tapering phase that began at the first 60-day period with ≥15% reduction in average daily dose across overlapping 60-day windows through seven months of follow-up. We applied spectral clustering as we preferred an assumption-free approach with no apriori information being imposed. Spectral clustering identified several cluster-cohorts, with three that included over 98% of the sample. These three clusters were similar in baseline characteristics, but differed markedly in the magnitude, velocity, duration, and endpoint of tapering. The cluster-cohort characterised by moderately rapid, steady tapering, most often to an end opioid dose of zero, had excess drug-related events, mental health events, and deaths, compared with a cluster characterised by very slow, steady tapering with long-term opioid maintenance. Moderately rapid tapering to discontinuation may be associated with higher risk than slow tapering with longer-term maintenance of opioid analgesia.
Furthermore, several clusters highlighted a cohort that had complete taper reversals, indicating a treatment failure as the tapering was not maintained. Our findings suggest that identifying subtle yet clinically meaningful patterns in opioid prescribing data, such as patterns within the dose trajectories, can highlight the distinct characteristics separating subpopulations.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA
©2023 Copyright held by the owner/author(s).

CCS CONCEPTS
• Applied computing → Health informatics; Physical sciences and engineering.

KEYWORDS
high dose opioids, spectral clustering, patient subpopulations, phenotype clustering, opioid crisis

ACM Reference Format:
Monika Ray, Joshua J. Fenton, and Patrick S. Romano. 2023. Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Events. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 9 pages.

1 INTRODUCTION
National prescribing guidelines by the Centers for Disease Control and Prevention (CDC) and the current opioid overdose crisis have led to substantial dose tapering among patients on long-term opioid therapy for chronic pain, especially since 2016 [10, 16, 30]. A quality metric endorsed by the National Quality Forum (NQF) encourages prescribers to reduce opioid doses below 90 morphine milligram equivalents (MME) per day [33].
In the setting of long-term opioid therapy for chronic pain, several studies have shown worse outcomes associated with rapid dose reduction [1, 13, 17, 41], and dose tapering has emerged as a complex issue for both physicians and patients. To better inform evidence-based clinical practices, health system policies, and public programmes, it is necessary to characterise population heterogeneity (phenotype clustering) and to understand which patients are appropriate candidates for different tapering approaches. This type of research requires a better understanding of the variety of tapering trajectories that clinicians implement in diverse populations to enable comparisons of the risks and benefits of alternative approaches in relevant subpopulations. Large healthcare data warehouses that accumulate longitudinal records from multiple sources offer great opportunities for improved understanding of population heterogeneity in opioid dose management. To undertake this research, we used retrospective data from the OptumLabs Data Warehouse (OLDW), which includes longitudinal health information for over 109 million commercial enrollees and 12.5 million Medicare Advantage enrollees. We leveraged the retrospective cohort previously created by Agnoli and colleagues [1], whose prior research suggested that the peak tapering velocity has a significant mean effect on adverse outcomes. However, opioid-dependent patients with chronic pain often resist any dose reduction, while pharmacies and regulators encourage dose reduction for every eligible patient. To inform better clinical practice and policies, we need to understand how the peak tapering velocity fits into overall patterns of opioid dose management over time, and then explore the characteristics of higher- and lower-risk subpopulations of patients undergoing dose tapering.
For this purpose, we used spectral clustering to describe clinically meaningful subpopulations. Specifically, we wanted to examine similarities among patients within a cluster and differences among patients across clusters. Spectral clustering has been applied to speech processing, computer vision and exploratory data mining in biology [3, 6, 11, 21, 38, 42], but opioid dosing is a novel and highly topical application in the current era of increasing opioid-related overdose death rates [15]. This work deviates from the popular hypothesis-driven approaches, where the functional form of the models is independent predictors and dependent outcomes. In this data-driven approach, the aim is to first cluster phenotypes, without classifying features as independent or dependent variables, and then identify meaningful signatures within these clusters [25]. These signatures can then be used in predictive models as either predictors or outcomes. The main purpose of phenotype clustering is to uncover hidden patterns. The primary focus of our exploratory work is to see (1) how the patients cluster based on their phenotypes (grouping patterns or phenotypes) and (2) whether these clusters have any remarkable differences (i.e., to identify signatures that can be used in predictive analytics).

1.1 Data Cohort and Adverse Events
We obtained data from 2008-2018 for adults from the OptumLabs Data Warehouse (OLDW), which contains de-identified administrative claims data, including medical and pharmacy claims and eligibility information for commercial and Medicare Advantage enrollees, representing a mixture of ages and regions across the United States. The entire cohort, which we received from Agnoli and colleagues [1], had a stable baseline period of 12 consecutive months at a high opioid dose ≥50 MME, resulting in 113,618 patients.
The tapered cohort was defined as the subset of patients who had a dose tapering phase, which began on the first 60-day period with ≥15% reduction in average daily dose across overlapping 60-day windows through the initial seven months of follow-up. Patients who had ≥15% reduction in average daily dose over a longer time frame were not included, due to uncertainty about the intent of slight MME dose reductions (which could be driven by delays in picking up prescriptions). To facilitate interpretation, we selected a population of patients who had only one period of tapering. Mortality in the tapered cohort was determined by analysing the time after taper initiation and matching against the records in the OLDW mortality table. Adverse events included emergency department (ED) visits or hospitalisations for (1) drug or alcohol overdose or withdrawal (drug-related events); and (2) depression, anxiety, or suicide attempts (mental health events). Drug-related and mental health events were identified using International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes for claims from October 2015 through 2019, and ICD-9-CM diagnosis codes for claims from 2008 through September 2015. Comorbidities were identified for all patients using the available software (AHRQ "Elixhauser" Comorbidity Software) in the OLDW [12, 29]. This project was determined by the University of California Office of the President to be exempt from human subjects review, as the OLDW uses completely de-identified, anonymised data.

1.2 Analytic Methods
We considered several methods to identify subpopulations and their characteristics, such as K-Means clustering and latent class analysis (LCA). K-Means clustering is a popular clustering algorithm, but it is based on many restrictive assumptions, which most real-world datasets violate [20, 35]. The algorithm operates on the input data matrix and, hence, is sensitive to the size of the data (N) as well as the number of features.
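As an illustration, the windowed taper-onset definition above can be sketched in Python. This is a simplified reading (each overlapping 60-day window's mean daily dose is compared against the stable baseline mean); the function name and exact comparison rule are ours, not taken from the study's code.

```python
def taper_start(daily_mme, baseline_mme, window=60, threshold=0.15):
    """Return the index of the first overlapping `window`-day period
    whose mean daily dose is at least `threshold` (15%) below the
    stable baseline mean, or None if no taper is detected."""
    for start in range(len(daily_mme) - window + 1):
        window_mean = sum(daily_mme[start:start + window]) / window
        if window_mean <= baseline_mme * (1 - threshold):
            return start
    return None
```

For example, a patient stable at 100 MME whose dose drops to 80 MME is flagged at the first 60-day window averaging 85 MME or less; a patient who never drops below 85% of baseline returns None.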
LCA [23, 43], a type of finite mixture model, may be suitable for describing dose trajectories, but it requires an outcome to be specified. By comparison, spectral clustering is purely unsupervised and does not require outcome variables. For our analyses, we used a novel spectral clustering algorithm (Spectrum) developed by John and colleagues [21]. Spectral graph theory associates the spectrum of a matrix, i.e., the eigenvalues of a matrix, to the properties of a graph via the Laplacian matrix [7, 8, 37]. It operates on graphs that are constructed between neighbouring nodes that represent data points (i.e., patients). It identifies arbitrarily shaped clusters (with convex or non-convex boundaries) using the eigenvectors in the Laplacian similarity matrix [7, 9, 26, 46]. A Laplacian similarity matrix models the local neighborhood relationships between data points as an undirected graph [4, 37, 40]. Spectral clustering is robust to the geometry of the clusters and to outliers, and does not require the user to specify the number of clusters [2, 24, 46]. It identifies the number of clusters by computing the differences between the consecutive ordered eigenvalues of the graph Laplacian and identifying the first pair of consecutive eigenvalues with the maximum difference in their values. The steps of spectral clustering include (1) creation of the similarity matrix, (2) creation of the Laplacian matrix, and (3) creation of clusters [32, 44]. Variations of spectral clustering algorithms address issues related to creation of the similarity matrix, graph partitioning, and speed on massive datasets. Since spectral clustering operates on the Laplacian similarity matrix, which is an NxN matrix of N data points, it is sensitive to the size of the data.
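The three steps above can be illustrated with a minimal NumPy sketch: a toy two-way partition built from an RBF similarity matrix, the unnormalised graph Laplacian, and the sign of the Fiedler vector. This is a didactic simplification, not the Spectrum algorithm used in this work.

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Toy spectral clustering into two groups:
    (1) RBF similarity matrix, (2) graph Laplacian L = D - W,
    (3) split on the sign of the Fiedler vector."""
    # (1) pairwise RBF (heat-kernel) similarities
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # (2) unnormalised graph Laplacian
    L = np.diag(W.sum(axis=1)) - W
    # (3) eigenvector of the second-smallest eigenvalue (Fiedler vector);
    # eigh returns eigenvalues in ascending order for a symmetric matrix
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)
```

On two well-separated 1-D blobs, the sign of the Fiedler vector recovers the two groups; real implementations instead embed the data with several eigenvectors and run a final clustering step (GMM, in Spectrum's case).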
The Spectrum algorithm developed by John et al. is novel in the way it combines the following features: (1) the combined Zelnik-Manor self-tuning [49] and Zhang density-aware [50] kernels to create the similarity matrix, (2) the Ng spectral clustering method to estimate the optimal number of clusters [31] and Gaussian mixture modelling (GMM) [47] to finally cluster the data, and (3) a fast approximate spectral clustering (FASP) method [48] to allow for fast clustering of massive data on regular desktop machines. The self-tuning component of the kernel adjusts to the scale of the data, while the density-aware component adapts to the local density of the data, creating more or fewer connections depending on the density of the regions. Spectrum uses the diffusion of tensor product graphs (TPG) to capture higher order information in the data and highlight underlying patterns in the data [39]. The final clusters are plotted using the first two principal components, PC1 and PC2. We did not use the eigen gap-statistic to determine the number of clusters, as it was not essential for us to constrain the number of clusters, nor were we against identifying small cohorts if a cohort had important patterns to investigate further. In our work, we were searching for anomalies or 'interesting patterns' that could explain the underlying population heterogeneity.
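The eigen gap rule described earlier (place k at the largest difference between consecutive ordered Laplacian eigenvalues) can be written as a short helper. This is an illustrative sketch, not part of Spectrum's API, and as noted above the study deliberately did not constrain cluster counts this way.

```python
import numpy as np

def eigengap_k(laplacian_eigenvalues, max_k=10):
    """Estimate the number of clusters as the position of the largest
    gap between consecutive ascending Laplacian eigenvalues."""
    vals = np.sort(np.asarray(laplacian_eigenvalues))[:max_k]
    return int(np.argmax(np.diff(vals))) + 1
```

For a spectrum like [0.0, 0.01, 0.02, 0.9, 1.0, 1.1], the largest gap sits after the third eigenvalue, suggesting three clusters.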
The eigen gap heuristic works well if there are well-defined clusters, but it is not of much help when there are noisy or overlapping clusters, which is likely to be the case in this data. The variables in the input space of the spectral clustering algorithm were age, gender, monthly average opioid dose (MME), mean baseline dose, count of drug-related events in the pre-taper and after tapering initiation phases, the number of mental health events in the pre-taper and after tapering initiation phases, benzodiazepine co-prescription at baseline and at 30 days, 31 Elixhauser comorbidity flags, and the change in dose across consecutive months for 12 months. The number of drug-related and mental health events was identified for each patient before taper and after taper initiation, as these were the adverse events of interest. We reviewed each cluster to identify the prevalence of different adverse events as well as the number of deaths after taper initiation. We report the distinguishing characteristics across the cluster subpopulations. For counterfactual inference, we identified the number and proportion of drug-related and mental health events in each cluster, and then computed the excess number of those events relative to the null assumption of equal event risk across all clusters.
The counterfactual calculation for each adverse event is given by:

ExcessEvents = NumEventsCluster − NumPatientsCluster × (TotalEvents / TotalPatients),

where, for each adverse event, i.e., mortality, drug-related events or mental health events, ExcessEvents is the number of excess events in the cluster, NumEventsCluster is the number of observed events within the cluster, NumPatientsCluster is the number of patients in the cluster, TotalEvents is the total number of adverse events in the entire data, and TotalPatients is the total number of patients in the analysis.

2 RESULTS
Among the 113,618 patients in the entire cohort, 33,628 had one or more phases of opioid dose tapering (29.5%) based on the tapering definition of ≥15% reduction in average daily dose in 7 months of follow-up [1]. Fig. 1 shows the analytical pipeline and the resultant plot of the 10 clusters identified. We could not show all ten clusters clearly in a 2-D plot. Since spectral clustering plots the clusters by collapsing them onto the first two principal components, the multi-dimensional aspect of the clusters is not visible. However, Fig. 1 shows that the clusters are not spherical and the data has outliers. Table 1 shows the characteristics of patients who tapered; the sample was 54% female and 92% had only one tapering period available for analysis. Spectral clustering of 30,932 patients who underwent single tapers resulted in 10 clusters (groups of patients or subpopulations) with relatively similar baseline characteristics. All clusters had patients with high mean baseline doses of 140-237 MME/day. Of particular interest were the three large clusters and their baseline characteristics, shown in Table 2. The other seven clusters' characteristics are discussed below but not shown due to small cell size policy. The three large clusters (1, 2, and 10) were very similar demographically, with mean ages of 58.7, 57.0, and 58.4 years, and 56%, 53%, and 50% female composition, respectively.
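The counterfactual excess-events formula above translates directly into code. The numbers in the example are hypothetical, not taken from the study's tables.

```python
def excess_events(num_events_cluster, num_patients_cluster,
                  total_events, total_patients):
    """Observed events in a cluster minus the count expected under the
    null assumption that every cluster shares the overall event rate."""
    expected = num_patients_cluster * (total_events / total_patients)
    return num_events_cluster - expected
```

For instance, a hypothetical cluster of 100 patients with 50 events, drawn from a population of 1,000 patients with 250 events overall, would be expected to have 25 events, so it shows 25 excess events.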
They were also similar on baseline co-prescribing of benzodiazepines (29%, 30%, and 30%, respectively) and comorbid diagnoses during the baseline year, such as alcohol abuse and dependence (2%, 3%, and 2%, respectively), drug abuse and dependence (17%, 17%, and 15%, respectively), and depression (32%, 31%, and 30%, respectively). Furthermore, they had similar medical experiences during their pre-taper period of stable opioid dosing, with relatively few drug-related events (mean 0.042, 0.053, and 0.043, respectively) and more mental health events (mean 3.81, 4.03, and 3.66, respectively). Fig. 2 compares the tapering trajectories across clusters. Each trajectory is plotted as the average monthly dose of the patients in the cluster. The three largest clusters had markedly different opioid dose tapering trajectories and associated adverse events, as shown in Table 3. The number of excess events represents the difference between the number of observed events and the number of events that would have occurred if all the clusters had the same event rate. About 55% of patients were in cluster 1, characterised by very slow and steady tapering to a final dose about two-thirds of baseline, with low event rates and no reversal to the pre-taper baseline dose. While clusters 2 and 10 looked quite similar in their baseline characteristics, they had very different taper trajectories. Cluster 2 was characterised by relatively rapid tapering to zero or very low doses, while cluster 10 was characterised by somewhat slower tapering from lower baseline doses to higher end doses. Both these clusters had slightly higher event rates than other clusters. Clusters 2 and 10 also had more drug-related events than cluster 1 (mean 0.116 and 0.128 versus 0.074), more mental health events (mean 0.089 and 0.075 versus 0.058), and more deaths (mean 0.079 and 0.098 versus 0.036) during the tapering year.
However, compared to cluster 10, cluster 2 had higher baseline mean and median doses (192.3 and 137.0 MME versus 140.3 and 104.0 MME), and a lower mean end dose (12.9 versus 37.6 MME). The slow trajectory for cluster 1, and the very low or zero doses in clusters 2 and 10, continued into the 15th month, although those months were not included in the spectral clustering analyses.

The characteristics of the taper trajectories for all the clusters are detailed in Table 4. The left panel in Fig. 3 shows the proportion of patients with a 0 MME dose of opioids across the three clusters each month, while the right panel shows the taper trajectory. Table 5 shows the relative change in the proportion of patients who were prescribed 0 MME opioids at each time point in the three clusters. Cluster 2 had the highest proportion of patients (73%) who were completely tapered off opioids at the end of 12 months, compared to cluster 10 (66%) and cluster 1 (2%). Since cluster 1 demonstrated the safest outcomes, we compared clusters 2 and 10 to cluster 1. The graph in the left panel in Fig. 3 shows that cluster 2 had a steep yet steady upward trend in the proportion of patients who were taken off opioids, whereas patients in cluster 1 almost uniformly stayed on opioids, and cluster 10 demonstrated a pattern of delayed discontinuation.

The remaining 1.3% of patients sorted into seven smaller clusters, all of which had patients who were tapered to or close to 0 MME (not shown due to small cell size policy). In clusters 3, 4, and 5, dose tapering to near zero occurred very rapidly within 4 months after initiation, but the pre-taper dose was quickly restored and slow tapering was initiated instead.

epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. Monika Ray, Joshua J. Fenton, and Patrick S. Romano

Figure 1: Analysis Flowchart

Table 1: Characteristics of the patients who tapered

  Variable                               Category    n
  Gender                                 Female      18,197
                                         Male        15,431
  Age                                    Mean±Std.   58.0±11.6
  Number of tapers                       1           30,932
                                         2           2,462
                                         >=3         234
  Drug-related events before tapering    0           32,238
                                         1           1,182
                                         >=2         208
  Drug-related events after tapering     0           31,210
                                         1           1,888
                                         2           356
                                         >=3         174
  Mental health events before tapering   0           14,788
                                         1           3,984
                                         2           2,949
                                         3           2,040
                                         4           1,665
                                         5           1,223
                                         6           1,034
                                         >=7         5,945
  Mental health events after tapering    0           32,041
                                         1           1,096
                                         2           300
                                         >=3         191

Table 2: Characteristics of Clusters 1, 2 and 10 in the pre-taper period

  Cluster | No. patients | Age (mean) | Female (%) | Benzodiazepine Rx (%) | Alcohol abuse (%) | Depression (%) | Drug abuse (%) | Drug-related event counts (mean) | Mental health event counts (mean) | Base dose (mean MME)
  1       | 16,965       | 58.74      | 55.7       | 28.9                  | 2.4               | 31.7           | 16.6           | 0.04                             | 3.81                              | 189.82
  2       | 13,025       | 56.96      | 53.1       | 30.1                  | 3.0               | 31.4           | 16.5           | 0.05                             | 4.03                              | 192.31
  10      | 531          | 58.36      | 49.5       | 29.7                  | 3.4               | 30.3           | 15.1           | 0.04                             | 3.66                              | 140.33

Table 3: Adverse events after taper initiation in clusters 1, 2 and 10

  Cluster | No. patients (%) | Drug-related events/1000 | No. excess drug-related events | Mental health events/1000 | No. excess mental health events | Deaths/1000 | No. excess deaths
  1       | 16,965 (55%)     | 74.0                     | -320.2                         | 58.4                      | -240.2                          | 36.1        | -329.8
  2       | 13,025 (42%)     | 116.2                    | 303.6                          | 89.4                      | 220.5                           | 79.1        | 306.2
  10      | 531 (< 2%)       | 128.1                    | 18.7                           | 75.3                      | 1.5                             | 97.9        | 22.5

Table 4: Average monthly dose for 12 months from taper initiation - Taper Trajectories

  Cluster | BaseDose | Mon1   | Mon2   | Mon3   | Mon4   | Mon5   | Mon6   | Mon7   | Mon8   | Mon9   | Mon10  | Mon11  | Mon12  | Taper trajectory
  1       | 189.82   | 174.53 | 170.27 | 165.64 | 161.23 | 157.28 | 154.15 | 155.05 | 155.53 | 155.25 | 154.05 | 151.68 | 144.01 | Very slow, no reversal
  2       | 192.31   | 175.19 | 157.04 | 139.42 | 119.01 | 96.06  | 75.19  | 59.71  | 45.49  | 33.53  | 23.35  | 15.18  | 12.90  | Rapid, no reversal
  3       | 236.81   | 213.18 | 121.69 | 1.38   | 193.46 | 204.26 | 206.02 | 191.60 | 163.58 | 150.98 | 141.49 | 129.90 | 114.59 | Very rapid, complete reversal
  4       | 192.57   | 179.16 | 0.44   | 185.31 | 194.26 | 194.64 | 176.29 | 167.38 | 160.98 | 150.52 | 143.25 | 134.76 | 133.31 | Very rapid, complete reversal
  5       | 196.99   | 183.05 | 147.09 | 92.71  | 0.33   | 172.22 | 176.60 | 158.29 | 145.41 | 139.10 | 135.23 | 119.75 | 113.12 | Very rapid, complete reversal
  6       | 212.81   | 205.10 | 182.34 | 153.96 | 106.37 | 77.02  | 5.26   | 0.00   | 168.49 | 169.27 | 152.98 | 120.84 | 115.09 | Very rapid, complete reversal
  7       | 227.55   | 217.24 | 171.99 | 152.88 | 122.05 | 101.76 | 57.73  | 31.72  | 22.56  | 0.00   | 148.42 | 147.73 | 135.03 | Rapid, partial reversal
  8       | 217.07   | 205.71 | 177.62 | 161.43 | 145.93 | 102.60 | 78.04  | 64.87  | 51.06  | 33.13  | 0.00   | 157.58 | 166.52 | Rapid, partial reversal
  9       | 220.37   | 203.30 | 160.72 | 117.39 | 85.31  | 63.20  | 59.18  | 48.60  | 36.30  | 29.20  | 18.94  | 0.00   | 143.26 | Rapid, partial reversal
  10      | 140.33   | 124.30 | 114.04 | 111.72 | 109.34 | 101.91 | 92.57  | 85.40  | 80.46  | 100.04 | 101.61 | 81.17  | 37.57  | Erratic, no reversal

Figure 2: The average monthly dose in MME for all the patients within each cluster.
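As a sanity check on these trajectories, the taper-onset rule from the cohort definition (first 60-day period with a ≥15% reduction in average daily dose) can be sketched against the cluster-average doses in Table 4. This is an illustrative reading only: it approximates overlapping 60-day windows with consecutive two-month averages and measures the drop against the stable baseline mean, which may differ from the study's exact claims-based logic:

```python
def taper_start(monthly_mme, baseline_mme, drop=0.15, window=2):
    """Index of the first ~60-day window (two consecutive months here)
    whose average dose is at least `drop` below baseline; None if absent."""
    threshold = (1.0 - drop) * baseline_mme
    for i in range(len(monthly_mme) - window + 1):
        if sum(monthly_mme[i:i + window]) / window <= threshold:
            return i
    return None

# Cluster 2's average monthly doses from Table 4 (baseline 192.31 MME).
cluster2 = [175.19, 157.04, 139.42, 119.01, 96.06, 75.19,
            59.71, 45.49, 33.53, 23.35, 15.18, 12.90]
start = taper_start(cluster2, 192.31)   # 0-based index of the onset window
```

On cluster 2's averaged trajectory, the first two-month window (166.1 MME) stays above the 85% threshold (163.5 MME), and the second window crosses it, consistent with a rapid taper beginning shortly after initiation.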
On the other hand, in clusters 6, 7, 8, and 9, rapid tapering occurred over a longer period of 6-11 months, but the taper was largely reversed and the subsequent trajectory was truncated due to the cohort design. Drug-related event rates and mental health event rates were quite variable across these small clusters (data not shown), but in aggregate, the mental health event rate of patients in these seven clusters was over twice that of cluster 1 (mean 0.117 versus 0.058).

Figure 3: The proportion of patients without opioids, i.e., with an average monthly dose of 0 MME, in the three clusters of interest and their corresponding tapering trajectories.

Table 5: Relative change in the proportion of patients who were prescribed 0 MME opioids by month

  Month | C1 Prop. patients | C1 Relative change | C2 Prop. patients | C2 Relative change | Diff. relative changes C1 - C2 | C10 Prop. patients | C10 Relative change | Diff. relative changes C1 - C10
  2nd   | 0.007 |       | 0.058 |      |       | 0.024 |       |
  3rd   | 0.010 | 0.046 | 0.112 | 0.95 | -0.49 | 0.038 | 0.54  | -0.08
  4th   | 0.013 | -0.99 | 0.187 | 0.66 | -1.65 | 0.056 | 0.50  | -1.49
  5th   | 0.015 | 0.13  | 0.287 | 0.54 | -0.41 | 0.090 | 0.60  | -0.47
  6th   | 0.016 | -0.98 | 0.378 | 0.32 | -1.30 | 0.109 | 0.21  | -1.19
  7th   | 0.009 | -0.46 | 0.454 | 0.20 | -0.66 | 0.154 | 0.41  | -0.87
  8th   | 0.010 | -0.99 | 0.530 | 0.17 | -1.16 | 0.196 | 0.27  | -1.26
  9th   | 0.008 | -0.21 | 0.597 | 0.13 | -0.34 | 0.102 | -0.48 | 0.27
  10th  | 0.008 | -0.99 | 0.659 | 0.10 | -1.10 | 0.098 | -0.04 | -0.95
  11th  | 0.007 | -0.15 | 0.707 | 0.07 | -0.22 | 0.358 | 2.65  | -2.80
  12th  | 0.024 | -0.98 | 0.733 | 0.04 | -1.01 | 0.663 | 0.85  | -1.83

Relative change refers to the difference in the proportion of patients within the cluster between the current and the previous month. A negative value indicates that fewer patients were prescribed 0 MME opioids in the current month compared to the previous month.
C1 - Cluster 1; C2 - Cluster 2; C10 - Cluster 10.

3 DISCUSSION
In this large longitudinal cohort of patients with chronic pain receiving high-dose opioids at stable dosing for at least one year, spectral clustering analysis suggested wide variability in dose tapering patterns over the first year of tapering. These trajectories show notable variation in the velocity and duration of tapering, post-tapering minimum doses, and subsequent re-initiation (taper reversal) of moderate-to-high opioid doses, which was an unexpected finding. While the specific number of clusters is not important, the cohorts identified were interesting and are discussed here. The largest cluster (cluster 1, with 55% of patients) was characterised by very slow, gradual tapering from a mean baseline dose of 190 MME to 144 MME at 12 months, whereas the second largest cluster (cluster 2, with 42% of patients) was characterised by quicker and steeper tapering from a mean baseline dose of 192 MME to only 12.9 MME (with 73% of patients discontinued). The latter cluster, unlike other clusters, had a substantial excess of both drug-related and mental health events after the initiation of tapering, suggesting that tapering patients accustomed to high-dose prescription opioids down to zero may be associated with important health risks. Our results suggest that there is a significant subpopulation of patients receiving high-dose opioids for chronic pain who may not tolerate tapering to very low doses. Many of these patients may have had opioid use disorders; previous research in the OLDW has shown that such patients have better outcomes if treated with buprenorphine or methadone [45]. There wasn't any strong rationale to specify the number of clusters, as we were looking for 'interesting patterns' which could seem like outliers compared to the rest of the data.
Notably, spectral clustering identified previously unsuspected and unusual patterns in the opioid dose management data. In particular, two small clusters were characterised by rapid tapering to negligible or zero doses, followed by re-initiation of prescription opioids at moderately high doses. These patterns merit further exploration, as they strongly suggest that reversal of tapering may be a marker of an unsuccessful tapering strategy and that clinicians can safely resume prior opioid doses for some of these patients. These patients with unsuccessful tapers need to be separated and studied alongside the group of successful tapers, rather than be combined as was done when this cohort was selected for analysis (see Data Cohort and Adverse Events section). This suggests that the definition of a tapered cohort needs to be revisited and taper reversals counted as an adverse event. Our findings highlight the importance of considering the velocity of tapering, as suggested by Agnoli and colleagues' research, along with the taper duration and post-tapering final dose, as clinicians attempt to devise safer dose tapering strategies to address the current opioid overdose epidemic in the US. Unsupervised data mining methods are powerful tools when the aim is to understand the data better and see what may have been previously missed in hypothesis-driven studies. Lastly, unsupervised knowledge discovery research helps in extracting novel, unsuspected phenomena that can be investigated using supervised methods. These methods may also challenge what was previously thought to be true; for example, by identifying previously unrecognised patterns of tapering reversal shown in Fig. 2.

During the writing of this manuscript, another report was published that analysed trajectories in patients receiving long-term opioid therapy using group-based trajectory modeling (GBTM) [5]. Binswanger's analysis identified five trajectories.
From the clinical perspective, this is interesting but is an oversimplification, as it puts all tapering patients into two groups, one slightly decreasing (which they reassigned to the stable group) and one decreasing (which they compared with the stable group), but they did not clearly identify taper reversals, suggesting that all tapers are maintained over time. We selected our cohort based on whether they tapered at some point but did not filter to select those with decreasing trajectories based on different velocities. Hence, it is quite plausible to expect multiple groups. In addition to being fully exploratory, with no assumptions on what kind of trajectories to expect, our analysis focused on patients for whom a taper was pre-determined, in order to understand the different types and speeds of tapering. Therefore, our results support and facilitate future analyses comparing the outcomes of these different tapering approaches with the alternative of not tapering at all (a control group of non-tapers), which is a viable approach but was not represented in our sample. Another notable difference from Binswanger's work is that we did not assume any data properties, such as distributions or the number of anticipated clusters, to run spectral clustering, and our dataset is many times larger and representative of the entire population of the US. As we were searching for subtle differences in a population that consists of tapering patients, we needed a large cohort and methods that do not impose any assumptions on the input data or the results in order to receive an amplified signal. This is exactly what knowledge discovery is, i.e., where the scholar keeps an open mind about the kind of patterns/information that will emerge. Unlike Binswanger's report, we did not impose any restriction on the spectral clustering algorithm.
It was during the analysis of the clusters, undertaken to understand why the patients segregated as they did, that we noticed the pattern of the trajectories was the point of subtle difference, and we discussed this in detail. This is work in progress, as we will need to further analyse these patterns using parametric methods and also study other potential outcomes of such tapering patterns. For the purpose of knowledge discovery, we preferred an assumption-free approach, with no a priori information being imposed in any phase of the analysis. Furthermore, as we did not have any prior knowledge of the underlying distribution patterns in this cohort, GBTM could have led us to incorrect results [28]. GBTM relies heavily on prior information, which, in essence, is a different approach than the one here, which was to identify patterns that automatically emerge and would correlate with nuanced differences in an already tapering population.

We acknowledge some limitations in our analyses, such as the unknown intent of the prescribing provider. For example, the physician's choice of a rapid or slow taper may be driven by unobserved characteristics of patients or their medical histories, which may independently contribute to the resulting outcomes. We were also unable to distinguish patient-supported tapering from physician-demanded tapering, or what may have triggered taper reversals. Finally, the current data do not capture illicit opioid use, sharing of opioids prescribed for other patients, or methadone administered in certified treatment programmes. Nevertheless, our study is relevant to the research and clinical communities grappling with the opioid crisis.
There is substantial interest in understanding factors contributing to the current epidemic of opioid-related overdose deaths [15], reflected in several recent economic analyses on physician prescribing patterns and opioid abuse [18, 22], statewide surveys and reports on prescribing practices and patient outcomes [14, 27, 34], and studies of physician prescribing patterns and outcomes [19, 36]. Previous studies of opioid dose tapering either used smaller, less nationally representative cohorts or relied on supervised analytic methods, where an outcome is always defined, to identify patient characteristics that are associated with adverse outcomes.

4 CONCLUSION
Our objective was knowledge discovery: to identify hidden, unsuspected patterns in claims data for patients with chronic pain. Since our analysis was performed using a large dataset that is representative of the population of the United States, these results are generalisable. The insights from this work will be used to extend this work and guide predictive analysis. Our study also highlights the need for more detailed investigations to identify what patient factors should be considered while suggesting a dose tapering regimen. Dose tapering to discontinuation may plausibly increase the risk of subsequent opioid overdose if these opioid-dependent patients seek alternative opioids from illicit sources or mix opioids with other sedating drugs such as benzodiazepines, thereby negating the purpose of dose tapering. We find these results, obtained using a data-driven approach, to be compelling enough to warrant further investigations into dose tapering patterns to inform future national prescribing policies and clinical practice.

ACKNOWLEDGMENTS
The authors extend their sincere gratitude to Guibo Xing, Elizabeth Magnan, Alicia Agnoli and Daniel Tancredi for data sharing as well as members of the OptumLabs OLDW team for their valuable guidance.
pIWDwAVUjoy
An interesting approach that may require more supporting analysis
3: Marginally above acceptance threshold
In this paper, the authors studied the problem of identifying meaningful clinical patterns among patients who had been prescribed opioids and subsequently had the doses reduced over different lengths of time. Overall the paper has several strong aspects:
- It was able to identify several dosage patterns that may be of interest towards clinical determination
- The initial analysis seems to point towards differing health outcomes for patients with slow vs rapid tapering (see more below)
- The paper covered sufficient details about cohort characteristics to let the reviewers judge the impact of the findings

However, from a health economic outcome research aspect, the paper is currently at an early stage and may need further follow-ups to support the validity of the identified patterns. The authors have acknowledged the limitation of not considering other factors that may capture the intent to reduce/increase dosing. However, this is a key aspect that may need to be validated, perhaps with certain assumptions such as IPW, to satisfy the significance of the findings. Further, the authors may want to consider survival analysis methods, especially with the possibility of right-censored events, to further analyze the clinical outcomes of the identified cohorts. Overall, this paper has certain promise but may be improved upon from a modeling and analysis aspect.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
qkDCSV-RMt
KDD.org/2023/Workshop/epiDAMIK
2023
Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Events
["MONIKA RAY", "Joshua J. Fenton", "Patrick Romano"]
National opioid prescribing guidelines and related quality measures have stimulated changes in opioid prescribing. Studies have shown that rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However, we do not know enough about the trajectories of dose tapering implemented in clinical practice, and how heterogeneous populations of patients respond to different treatments. Our aim was to examine prescribed opioid doses in a large, longitudinal, clinically diverse, national population of opioid-dependent patients with either Medicare or commercial insurance. We performed phenotype clustering to identify unsuspected, novel patterns in the data. In a longitudinal cohort (2008-2018) of 113,618 patients from the OptumLabs Data Warehouse with 12 consecutive months at a high, stable mean opioid dose ($\geq$50 morphine milligram equivalents), we identified 30,932 patients with one dose tapering phase that began at the first 60-day period with $\geq$15\% reduction in average daily dose across overlapping 60-day windows through seven months of follow-up. We applied spectral clustering as we preferred an assumption-free approach with no a priori information being imposed. Spectral clustering identified several cluster-cohorts, with three that included over 98\% of the sample. These three clusters were similar in baseline characteristics, but differed markedly in the magnitude, velocity, duration, and endpoint of tapering. The cluster-cohort characterised by moderately rapid, steady tapering, most often to an end opioid dose of zero, had excess drug-related events, mental health events, and deaths, compared with a cluster characterised by very slow, steady tapering with long-term opioid maintenance. Moderately rapid tapering to discontinuation may be associated with higher risk than slow tapering with longer-term maintenance of opioid analgesia. 
Furthermore, several clusters highlighted a cohort that had complete taper reversals, indicating a treatment failure as the tapering was not maintained. Our findings suggest that identifying subtle yet clinically meaningful patterns in opioid prescribing data, such as patterns within the dose trajectories, can highlight the distinct characteristics separating subpopulations.
["high dose opioids", "spectral clustering", "patient subpopulations", "personalised medicine", "healthcare", "opioid crisis", "phenotype clustering"]
ABSTRACT
National opioid prescribing guidelines and related quality measures have stimulated changes in opioid prescribing. Studies have shown that rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However, there isn't enough research on trajectories of dose tapering implemented in clinical practice, and how heterogeneous populations of patients respond to different treatments. Our aim was to examine prescribed opioid doses in a large, longitudinal, clinically diverse, national population of opioid-dependent patients with either Medicare or commercial insurance. We performed phenotype clustering to identify unsuspected, novel patterns in the data. In a longitudinal cohort (2008-2018) of 113,618 patients from the OptumLabs Data Warehouse with 12 consecutive months at a high, stable mean opioid dose (≥50 morphine milligram equivalents), we identified 30,932 patients with one dose tapering phase that began at the first 60-day period with ≥15% reduction in average daily dose across overlapping 60-day windows through seven months of follow-up. We applied spectral clustering as we preferred an assumption-free approach with no a priori information being imposed. Spectral clustering identified several cluster-cohorts, with three that included over 98% of the sample. These three clusters were similar in baseline characteristics, but differed markedly in the magnitude, velocity, duration, and endpoint of tapering. The cluster-cohort characterised by moderately rapid, steady tapering, most often to an end opioid dose of zero, had excess drug-related events, mental health events, and deaths, compared with a cluster characterised by very slow, steady tapering with long-term opioid maintenance. Moderately rapid tapering to discontinuation may be associated with higher risk than slow tapering with longer-term maintenance of opioid analgesia. 
Furthermore, several clusters highlighted a cohort that had complete taper reversals, indicating a treatment failure as the tapering was not maintained. Our findings suggest that identifying subtle yet clinically meaningful patterns in opioid prescribing data, such as patterns within the dose trajectories, can highlight the distinct characteristics separating subpopulations.

epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. ©2023 Copyright held by the owner/author(s).

CCS CONCEPTS
• Applied computing → Health informatics; Physical sciences and engineering.

KEYWORDS
high dose opioids, spectral clustering, patient subpopulations, phenotype clustering, opioid crisis

ACM Reference Format:
Monika Ray, Joshua J. Fenton, and Patrick S. Romano. 2023. Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Events. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 9 pages.

1 INTRODUCTION
National prescribing guidelines by the Centers for Disease Control and Prevention (CDC) and the current opioid overdose crisis have led to substantial dose tapering among patients on long-term opioid therapy for chronic pain, especially since 2016 [10, 16, 30]. A quality metric endorsed by the National Quality Forum (NQF) encourages prescribers to reduce opioid doses below 90 morphine milligram equivalents (MME) per day [33]. 
In the setting of long-term opioid therapy for chronic pain, several studies have shown worse outcomes associated with rapid dose reduction [1, 13, 17, 41], and dose tapering has emerged as a complex issue for both physicians and patients. To better inform evidence-based clinical practices, health system policies, and public programmes, it is necessary to characterise population heterogeneity (phenotype clustering) and to understand which patients are appropriate candidates for different tapering approaches. This type of research requires a better understanding of the variety of tapering trajectories that clinicians implement in diverse populations to enable comparisons of the risks and benefits of alternative approaches in relevant subpopulations. Large healthcare data warehouses that accumulate longitudinal records from multiple sources offer great opportunities for improved understanding of population heterogeneity in opioid dose management.

To undertake this research, we used retrospective data from the OptumLabs Data Warehouse (OLDW), which includes longitudinal health information for over 109 million commercial enrollees and 12.5 million Medicare Advantage enrollees. We leveraged the retrospective cohort previously created by Agnoli and colleagues [1], whose prior research suggested that the peak tapering velocity has a significant mean effect on adverse outcomes. However, opioid-dependent patients with chronic pain often resist any dose reduction, while pharmacies and regulators encourage dose reduction for every eligible patient. To inform better clinical practice and policies, we need to understand how the peak tapering velocity fits into overall patterns of opioid dose management over time, and then explore the characteristics of higher- and lower-risk subpopulations of patients undergoing dose tapering.
For this purpose, we used spectral clustering to describe clinically meaningful subpopulations. Specifically, we wanted to examine similarities among patients within a cluster and differences among patients across clusters. Spectral clustering has been applied to speech processing, computer vision and exploratory data mining in biology [3, 6, 11, 21, 38, 42], but opioid dosing is a novel and highly topical application in the current era of increasing opioid-related overdose death rates [15].

This work deviates from the popular hypothesis-driven approaches where the functional form of the models is independent predictors and dependent outcomes. In this data-driven approach the aim is to first cluster phenotypes, without classifying features as independent or dependent variables, and then identify meaningful signatures within these clusters [25]. These signatures can then be used in predictive models as either predictors or outcomes. The main purpose of phenotype clustering is to uncover hidden patterns. The primary focus of our exploratory work is to see (1) how the patients cluster based on their phenotypes (grouping patterns or phenotypes) and (2) whether these clusters have any remarkable differences (i.e., identify signatures that can be used in predictive analytics).

1.1 Data Cohort and Adverse Events
We obtained data from 2008-2018 for adults from the OptumLabs Data Warehouse (OLDW), which contains de-identified administrative claims data, including medical and pharmacy claims and eligibility information for commercial and Medicare Advantage enrollees, representing a mixture of ages and regions across the United States. The entire cohort, which we received from Agnoli and colleagues [1], had a stable baseline period of 12 consecutive months at a high opioid dose ≥50 MME, resulting in 113,618 patients. 
The tapered cohort was defined as the subset of patients who had a dose tapering phase, which began on the first 60-day period with ≥15% reduction in average daily dose across overlapping 60-day windows through the initial seven months of follow-up. Patients who had ≥15% reduction in average daily dose over a longer time frame were not included due to uncertainty about the intent of slight MME dose reductions (which could be driven by delays in picking up prescriptions). To facilitate interpretation, we selected a population of patients who had only one period of tapering. Mortality in the tapered cohort was determined by analysing the time after taper initiation and matching against the records in the OLDW mortality table.

Adverse events included emergency department (ED) visits or hospitalisations for (1) drug or alcohol overdose or withdrawal (drug-related events); and (2) depression, anxiety, or suicide attempts (mental health events). Drug-related and mental health events were identified using International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes for claims from October 2015 through 2019, and ICD-9-CM diagnosis codes for claims from 2008 through September 2015. Comorbidities were identified for all patients using the available software (AHRQ "Elixhauser" Comorbidity Software) in the OLDW [12, 29]. This project was determined by the University of California Office of the President to be exempt from human subjects review, as the OLDW uses completely de-identified, anonymised data.

1.2 Analytic Methods
We considered several methods to identify subpopulations and their characteristics, such as K-means clustering and latent class analysis (LCA). K-means clustering is a popular clustering algorithm, but it is based on many restrictive assumptions, which most real-world datasets violate [20, 35]. The algorithm operates on the input data matrix and, hence, is sensitive to the size of the data (N) as well as the number of features. 
LCA [23, 43], a type of finite mixture model, may be suitable for describing dose trajectories, but it requires an outcome to be specified. By comparison, spectral clustering is purely unsupervised and does not require outcome variables. For our analyses, we used a novel spectral clustering algorithm (Spectrum) developed by John and colleagues [21]. Spectral graph theory associates the spectrum of a matrix, i.e. the eigenvalues of a matrix, to the properties of a graph via the Laplacian matrix [7, 8, 37]. It operates on graphs that are constructed between neighbouring nodes that represent data points (i.e., patients). It identifies arbitrarily shaped clusters (with convex or non-convex boundaries) using the eigenvectors in the Laplacian similarity matrix [7, 9, 26, 46]. A Laplacian similarity matrix models the local neighborhood relationships between data points as an undirected graph [4, 37, 40]. Spectral clustering is robust to the geometry of the clusters and outliers, and does not require the user to specify the number of clusters [2, 24, 46]. It identifies the number of clusters by computing the differences between the consecutive ordered eigenvalues of the graph Laplacian and identifying the first pair of consecutive eigenvalues with the maximum difference in their values.

The steps of spectral clustering include (1) creation of the similarity matrix, then (2) creation of the Laplacian matrix, and finally (3) creation of clusters [32, 44]. Variations of spectral clustering algorithms address issues related to creation of the similarity matrix, graph partitioning, and speed on massive datasets. Since spectral clustering operates on the Laplacian similarity matrix, which is an N x N matrix of N data points, it is sensitive to the size of the data. 
The Spectrum algorithm developed by John et al. is novel in the way it combines the following features: (1) combined Zelnik-Manor self-tuning [49] and Zhang density-aware [50] kernels to create the similarity matrix, (2) the Ng spectral clustering method to estimate the optimal number of clusters [31] and Gaussian mixture modelling (GMM) [47] to finally cluster the data, and (3) a fast approximate spectral clustering (FASP) method [48] to allow for fast clustering of massive data on regular desktop machines. The self-tuning component of the kernel adjusts to the scale of the data, while the density-aware component adapts to the local density of the data, creating more or fewer connections depending on the density of the regions. Spectrum uses the diffusion of tensor product graphs (TPG) to capture higher-order information in the data and highlight underlying patterns in the data [39]. The final clusters are plotted using the first two principal components, PC1 and PC2. We did not use the eigengap statistic to determine the number of clusters, as it was not essential for us to constrain the number of clusters, nor were we against identifying small cohorts if the cohort had important patterns to investigate further. In our work, we were searching for anomalies or 'interesting patterns' that could explain the underlying population heterogeneity. 
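The pipeline described above (similarity matrix, graph Laplacian, eigendecomposition, then clustering, with the eigengap rule for choosing the number of clusters) can be illustrated with a minimal NumPy sketch on synthetic data. This is not the Spectrum implementation, which combines self-tuning and density-aware kernels, TPG diffusion, and GMM clustering; it is only the textbook skeleton those steps build on:

```python
import numpy as np

def spectral_cluster(X, sigma=1.0):
    """Minimal spectral clustering: RBF similarity -> normalised
    Laplacian -> eigengap to choose k -> sign split (valid for k = 2)."""
    # (1) Similarity matrix: Gaussian (RBF) kernel on pairwise distances.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                      # no self-loops
    # (2) Symmetric normalised Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # (3) Eigengap heuristic: k sits at the largest jump in the
    # ascending sequence of eigenvalues of L.
    evals, evecs = np.linalg.eigh(L)
    k = int(np.argmax(np.diff(evals))) + 1
    # For k = 2 the sign of the second eigenvector separates the groups
    # (Spectrum instead fits a Gaussian mixture in eigenvector space).
    labels = (evecs[:, 1] > 0).astype(int)
    return k, labels

# Two well-separated synthetic "phenotype" clouds standing in for patients.
A = np.array([[0.1 * i, 0.1 * (i % 3)] for i in range(10)])
X = np.vstack([A, A + 4.0])
k, labels = spectral_cluster(X)
```

With well-separated groups, the two smallest eigenvalues are near zero and the third is of order one, so the largest eigengap correctly reports two clusters.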
The eigengap heuristic works well if there are well-defined clusters, but is not of much help when there are noisy or overlapping clusters, which is likely to be the case in this data.

The variables in the input space of the spectral clustering algorithm were age, gender, monthly average opioid dose (MME), mean baseline dose, the count of drug-related events in the pre-taper and after-tapering-initiation phases, the number of mental health events in the pre-taper and after-tapering-initiation phases, benzodiazepine co-prescription at baseline and at 30 days, 31 Elixhauser comorbidity flags, and the change in dose across consecutive months for 12 months. The number of drug-related and mental health events was identified for each patient before taper and after taper initiation, as these were the adverse events of interest. We reviewed each cluster to identify the prevalence of different adverse events as well as the number of deaths after taper initiation. We report the distinguishing characteristics across the cluster subpopulations. For counterfactual inference, we identified the number and proportion of drug-related and mental health events in each cluster, and then computed the excess number of those events relative to the null assumption of equal event risk across all clusters. 
The counterfactual calculation for each adverse event is given by ExcessEvents = NumEventsCluster − NumPatientsCluster × (TotalEvents / TotalPatients), where, for each adverse event (i.e., mortality, drug-related events, or mental health events), ExcessEvents is the number of excess events in the cluster, NumEventsCluster is the number of observed events within the cluster, NumPatientsCluster is the number of patients in the cluster, TotalEvents is the total number of adverse events in the entire data, and TotalPatients is the total number of patients in the analysis.

2 RESULTS

Among the 113,618 patients in the entire cohort, 33,628 had one or more phases of opioid dose tapering (29.5%) based on the tapering definition of ≥15% reduction in average daily dose in 7 months of follow-up [1]. Fig. 1 shows the analytical pipeline and the resultant plot of the 10 clusters identified. We could not show all ten clusters clearly in a 2-D plot. Since spectral clustering plots the clusters by collapsing them onto the first two principal components, the multi-dimensional aspect of the clusters is not visible. However, Fig. 1 shows that the clusters are not spherical and that the data has outliers. Table 1 shows the characteristics of patients who tapered; the sample was 54% female and 92% had only one tapering period available for analysis. Spectral clustering of 30,932 patients who underwent single tapers resulted in 10 clusters (groups of patients or subpopulations) with relatively similar baseline characteristics. All clusters had patients with high mean baseline doses of 140-237 MME/day. Of particular interest were the three large clusters and their baseline characteristics shown in Table 2. The other seven clusters' characteristics are discussed below but not shown due to small cell size policy. The three large clusters (1, 2, and 10) were very similar demographically, with mean ages of 58.7, 57.0, and 58.4 years, and 56%, 53%, and 50% female composition, respectively.
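The counterfactual excess-event calculation defined above is simple enough to express directly; a small helper (hypothetical, not from the study's code) makes the null assumption of equal risk explicit:

```python
def excess_events(num_events_cluster: float,
                  num_patients_cluster: float,
                  total_events: float,
                  total_patients: float) -> float:
    """Observed events in a cluster minus the events expected if every
    cluster shared the overall event rate (the null of equal risk)."""
    expected = num_patients_cluster * (total_events / total_patients)
    return num_events_cluster - expected

# illustrative numbers (not from the paper): a 500-patient cluster with
# 100 events, against an overall rate of 300 events among 3,000 patients
print(excess_events(100, 500, 300, 3000))  # expected = 50, so excess = 50.0
```

A negative result indicates fewer events than expected under equal risk, which is how the protective profile of the slow-tapering cluster shows up in the tables.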
They were also similar on baseline co-prescribing of benzodiazepines (29%, 30%, and 30%, respectively) and comorbid diagnoses during the baseline year, such as alcohol abuse and dependence (2%, 3%, and 2%, respectively), drug abuse and dependence (17%, 17%, and 15%, respectively), and depression (32%, 31%, and 30%, respectively). Furthermore, they had similar medical experiences during their pre-taper period of stable opioid dosing, with relatively few drug-related events (mean 0.042, 0.053, and 0.043, respectively) and more mental health events (mean 3.81, 4.03, and 3.66, respectively). Fig. 2 compares the tapering trajectories across clusters. Each trajectory is plotted as the average monthly dose of the patients in the cluster. The three largest clusters had markedly different opioid dose tapering trajectories and associated adverse events, as shown in Table 3. The number of excess events represents the difference between the number of observed events and the number of events that would have occurred if all the clusters had the same event rate. About 55% of patients were in cluster 1, characterised by very slow and steady tapering to a final dose about two-thirds of baseline, with low event rates and no reversal to pre-taper baseline dose. While clusters 2 and 10 looked quite similar in their baseline characteristics, they had very different taper trajectories. Cluster 2 was characterised by relatively rapid tapering to zero or very low doses, while cluster 10 was characterised by somewhat slower tapering from lower baseline doses to higher end doses. Both these clusters had slightly higher event rates than other clusters. Clusters 2 and 10 also had more drug-related events than cluster 1 (mean 0.116 and 0.128 versus 0.074), more mental health events (mean 0.089 and 0.075 versus 0.058), and more deaths (mean 0.079 and 0.098 versus 0.036) during the tapering year.
However, compared to cluster 10, cluster 2 had higher baseline mean and median doses (192.3 and 137.0 MME versus 140.3 and 104.0 MME) and a lower mean end dose (12.9 versus 37.6 MME). The slow trajectory for cluster 1, and the very low or zero doses in clusters 2 and 10, continued into the 15th month, although those months were not included in the spectral clustering analyses. The characteristics of the taper trajectories for all the clusters are detailed in Table 4. The left panel in Fig. 3 shows the proportion of patients with a 0 MME dose of opioids across the three clusters each month, while the right panel shows the taper trajectory. Table 5 shows the relative change in the proportion of patients who were prescribed 0 MME opioids at each time point in the three clusters. Cluster 2 had the highest proportion of patients (73%) who were completely tapered off opioids at the end of 12 months, compared to cluster 10 (66%) and cluster 1 (2%). Since cluster 1 demonstrated the safest outcomes, we compared clusters 2 and 10 to cluster 1. The graph in the left panel of Fig. 3 shows that cluster 2 had a steep yet steady upward trend in the proportion of patients who were taken off opioids, whereas patients in cluster 1 almost uniformly stayed on opioids, and cluster 10 demonstrated a pattern of delayed discontinuation.

The remaining 1.3% of patients sorted into seven smaller clusters, all of which had patients who were tapered to or close to 0 MME (not shown due to small cell size policy). In clusters 3, 4, and 5, dose tapering to near zero occurred very rapidly within 4 months after initiation, but the pre-taper dose was quickly restored and slow tapering was initiated instead. On the other hand, in clusters 6, 7, 8, and 9, rapid tapering occurred over a longer period of 6-11 months, but the taper was largely reversed and the subsequent trajectory was truncated due to the cohort design. Drug-related event rates and mental health event rates were quite variable across these small clusters (data not shown), but in aggregate, the mental health event rate of patients in these seven clusters was over twice that of cluster 1 (mean 0.117 versus 0.058).

Figure 1: Analysis Flowchart

Table 1: Characteristics of the patients who tapered
Variables | Categories | n
Gender | Female | 18,197
Gender | Male | 15,431
Age | Mean±Std. | 58.0±11.6
Number of tapers | 1 | 30,932
Number of tapers | 2 | 2,462
Number of tapers | >=3 | 234
Number of drug-related events before tapering | 0 | 32,238
Number of drug-related events before tapering | 1 | 1,182
Number of drug-related events before tapering | >=2 | 208
Number of drug-related events after tapering | 0 | 31,210
Number of drug-related events after tapering | 1 | 1,888
Number of drug-related events after tapering | 2 | 356
Number of drug-related events after tapering | >=3 | 174
Number of mental health events before tapering | 0 | 14,788
Number of mental health events before tapering | 1 | 3,984
Number of mental health events before tapering | 2 | 2,949
Number of mental health events before tapering | 3 | 2,040
Number of mental health events before tapering | 4 | 1,665
Number of mental health events before tapering | 5 | 1,223
Number of mental health events before tapering | 6 | 1,034
Number of mental health events before tapering | >=7 | 5,945
Number of mental health events after tapering | 0 | 32,041
Number of mental health events after tapering | 1 | 1,096
Number of mental health events after tapering | 2 | 300
Number of mental health events after tapering | >=3 | 191

Table 2: Characteristics of Clusters 1, 2 and 10 in the pre-taper period
Cluster | No. patients | Age (mean) | Female (%) | Benzodiazepine Rx (%) | Alcohol abuse (%) | Depression (%) | Drug abuse (%) | Drug-related event counts (mean) | Mental health event counts (mean) | Base dose (mean MME)
1 | 16,965 | 58.74 | 55.7 | 28.9 | 2.4 | 31.7 | 16.6 | 0.04 | 3.81 | 189.82
2 | 13,025 | 56.96 | 53.1 | 30.1 | 3.0 | 31.4 | 16.5 | 0.05 | 4.03 | 192.31
10 | 531 | 58.36 | 49.5 | 29.7 | 3.4 | 30.3 | 15.1 | 0.04 | 3.66 | 140.33

Table 3: Adverse events after taper initiation in clusters 1, 2 and 10
Cluster | No. patients (%) | Drug-related events/1000 | No. excess drug-related events | Mental health events/1000 | No. excess mental health events | Deaths/1000 | No. excess deaths
1 | 16,965 (55%) | 74.0 | -320.2 | 58.4 | -240.2 | 36.1 | -329.8
2 | 13,025 (42%) | 116.2 | 303.6 | 89.4 | 220.5 | 79.1 | 306.2
10 | 531 (<2%) | 128.1 | 18.7 | 75.3 | 1.5 | 97.9 | 22.5

Table 4: Average monthly dose for 12 months from taper initiation - Taper Trajectories
Cluster | BaseDose | Mon1 | Mon2 | Mon3 | Mon4 | Mon5 | Mon6 | Mon7 | Mon8 | Mon9 | Mon10 | Mon11 | Mon12 | Taper trajectory
1 | 189.82 | 174.53 | 170.27 | 165.64 | 161.23 | 157.28 | 154.15 | 155.05 | 155.53 | 155.25 | 154.05 | 151.68 | 144.01 | Very slow, no reversal
2 | 192.31 | 175.19 | 157.04 | 139.42 | 119.01 | 96.06 | 75.19 | 59.71 | 45.49 | 33.53 | 23.35 | 15.18 | 12.90 | Rapid, no reversal
3 | 236.81 | 213.18 | 121.69 | 1.38 | 193.46 | 204.26 | 206.02 | 191.60 | 163.58 | 150.98 | 141.49 | 129.90 | 114.59 | Very rapid, complete reversal
4 | 192.57 | 179.16 | 0.44 | 185.31 | 194.26 | 194.64 | 176.29 | 167.38 | 160.98 | 150.52 | 143.25 | 134.76 | 133.31 | Very rapid, complete reversal
5 | 196.99 | 183.05 | 147.09 | 92.71 | 0.33 | 172.22 | 176.60 | 158.29 | 145.41 | 139.10 | 135.23 | 119.75 | 113.12 | Very rapid, complete reversal
6 | 212.81 | 205.10 | 182.34 | 153.96 | 106.37 | 77.02 | 5.26 | 0.00 | 168.49 | 169.27 | 152.98 | 120.84 | 115.09 | Very rapid, complete reversal
7 | 227.55 | 217.24 | 171.99 | 152.88 | 122.05 | 101.76 | 57.73 | 31.72 | 22.56 | 0.00 | 148.42 | 147.73 | 135.03 | Rapid, partial reversal
8 | 217.07 | 205.71 | 177.62 | 161.43 | 145.93 | 102.60 | 78.04 | 64.87 | 51.06 | 33.13 | 0.00 | 157.58 | 166.52 | Rapid, partial reversal
9 | 220.37 | 203.30 | 160.72 | 117.39 | 85.31 | 63.20 | 59.18 | 48.60 | 36.30 | 29.20 | 18.94 | 0.00 | 143.26 | Rapid, partial reversal
10 | 140.33 | 124.30 | 114.04 | 111.72 | 109.34 | 101.91 | 92.57 | 85.40 | 80.46 | 100.04 | 101.61 | 81.17 | 37.57 | Erratic, no reversal

Figure 2: The average monthly dose in MME for all the patients within each cluster.

Figure 3: The proportion of patients without opioids, i.e., with an average monthly dose of 0 MME, in the three clusters of interest and their corresponding tapering trajectories.

Table 5: Relative change in the proportion of patients who were prescribed 0 MME opioids by month
Month | C1 prop. patients | C1 relative change | C2 prop. patients | C2 relative change | Diff. relative changes C1-C2 | C10 prop. patients | C10 relative change | Diff. relative changes C1-C10
2nd | 0.007 | | 0.058 | | | 0.024 | |
3rd | 0.010 | 0.046 | 0.112 | 0.95 | -0.49 | 0.038 | 0.54 | -0.08
4th | 0.013 | -0.99 | 0.187 | 0.66 | -1.65 | 0.056 | 0.50 | -1.49
5th | 0.015 | 0.13 | 0.287 | 0.54 | -0.41 | 0.090 | 0.60 | -0.47
6th | 0.016 | -0.98 | 0.378 | 0.32 | -1.30 | 0.109 | 0.21 | -1.19
7th | 0.009 | -0.46 | 0.454 | 0.20 | -0.66 | 0.154 | 0.41 | -0.87
8th | 0.010 | -0.99 | 0.530 | 0.17 | -1.16 | 0.196 | 0.27 | -1.26
9th | 0.008 | -0.21 | 0.597 | 0.13 | -0.34 | 0.102 | -0.48 | 0.27
10th | 0.008 | -0.99 | 0.659 | 0.10 | -1.10 | 0.098 | -0.04 | -0.95
11th | 0.007 | -0.15 | 0.707 | 0.07 | -0.22 | 0.358 | 2.65 | -2.80
12th | 0.024 | -0.98 | 0.733 | 0.04 | -1.01 | 0.663 | 0.85 | -1.83
Relative change refers to the difference in the proportion of patients within the cluster between the current and the previous month. A negative value indicates that fewer patients were prescribed 0 MME opioids in the current month compared to the previous month.
C1 - Cluster 1; C2 - Cluster 2; C10 - Cluster 10.

3 DISCUSSION

In this large longitudinal cohort of patients with chronic pain receiving high-dose opioids at stable dosing for at least one year, spectral clustering analysis suggested wide variability in dose tapering patterns over the first year of tapering. These trajectories show notable variation in the velocity and duration of tapering, post-tapering minimum doses, and subsequent re-initiation (taper reversal) of moderate-to-high opioid doses, which was an unexpected finding. While the specific number of clusters is not important, the cohorts identified were interesting and are discussed here. The largest cluster (cluster 1, with 55% of patients) was characterised by very slow, gradual tapering from a mean baseline dose of 190 MME to 144 MME at 12 months, whereas the second largest cluster (cluster 2, with 42% of patients) was characterised by quicker and steeper tapering from a mean baseline dose of 192 MME to only 12.9 MME (with 73% of patients discontinued). The latter cluster, unlike other clusters, had a substantial excess of both drug-related and mental health events after the initiation of tapering, suggesting that tapering patients accustomed to high-dose prescription opioids to zero may be associated with important health risks. Our results suggest that there is a significant subpopulation of patients receiving high-dose opioids for chronic pain who may not tolerate tapering to very low doses. Many of these patients may have had opioid use disorders; previous research in the OLDW has shown that such patients have better outcomes if treated with buprenorphine or methadone [45]. There was no strong rationale to specify the number of clusters, as we were looking for 'interesting patterns' which could seem like outliers compared to the rest of the data.
Notably, spectral clustering identified previously unsuspected and unusual patterns in the opioid dose management data. In particular, two small clusters were characterised by rapid tapering to negligible or zero doses, followed by re-initiation of prescription opioids at moderately high doses. These patterns merit further exploration, as they strongly suggest that reversal of tapering may be a marker of an unsuccessful tapering strategy and that clinicians can safely resume prior opioid doses for some of these patients. These patients with unsuccessful tapers need to be separated and studied alongside the group of successful tapers, rather than combined as was done when this cohort was selected for analysis (see Data Cohort and Adverse Events section). This suggests that the definition of a tapered cohort needs to be revisited and that taper reversals should be counted as an adverse event. Our findings highlight the importance of considering the velocity of tapering, as suggested by Agnoli and colleagues' research, along with the taper duration and post-tapering final dose, as clinicians attempt to devise safer dose tapering strategies to address the current opioid overdose epidemic in the US. Unsupervised data mining methods are powerful tools when the aim is to understand the data better and see what may have been previously missed in hypothesis-driven studies. Lastly, unsupervised knowledge discovery research helps in extracting novel, unsuspected phenomena that can be investigated using supervised methods. These methods may also challenge what was previously thought to be true, for example, by identifying the previously unrecognised patterns of tapering reversal shown in Fig. 2. During the writing of this manuscript, another report was published that analysed trajectories in patients receiving long-term opioid therapy using group-based trajectory modeling (GBTM) [5]. Binswanger's analysis identified five trajectories.
From the clinical perspective, this is interesting but is an oversimplification, as it puts all tapering patients into two groups: one slightly decreasing (which they reassigned to the stable group) and one decreasing (which they compared with the stable group). They did not clearly identify taper reversals, suggesting that all tapers are maintained over time. We selected our cohort based on whether they tapered at some point but did not filter to select those with decreasing trajectories based on different velocities. Hence, it is quite plausible to expect multiple groups. In addition to being fully exploratory, with no assumptions on what kind of trajectories to expect, our analysis focused on patients for whom a taper was pre-determined, to understand the different types and speeds of tapering. Therefore, our results support and facilitate future analyses comparing the outcomes of these different tapering approaches with the alternative of not tapering at all (a control group of non-tapers), which is a viable approach but was not represented in our sample. Another notable difference from Binswanger's work is that we did not assume any data properties, such as distributions or the number of anticipated clusters, to run spectral clustering, and our dataset is many times larger and representative of the entire population of the US. As we were searching for subtle differences in a population that consists of tapering patients, in order to receive an amplified signal, we needed a large cohort and methods that do not impose any assumptions on the input data or the results. This is exactly what knowledge discovery is, i.e., where the scholar keeps an open mind about the kind of patterns/information that will emerge. Unlike Binswanger's report, we did not impose any restriction on the spectral clustering algorithm.
It was during the analysis of the clusters, to understand why the patients segregated as they did, that we noticed the pattern of the trajectories was the point of subtle difference, and we discussed this in detail. This is work in progress, as we will need to further analyse these patterns using parametric methods and also study other potential outcomes of such tapering patterns. For the purpose of knowledge discovery, we preferred an assumption-free approach, with no a priori information being imposed in any phase of the analysis. Furthermore, as we did not have any prior knowledge of the underlying distribution patterns in this cohort, GBTM could have led us to incorrect results [28]. GBTM relies heavily on prior information, which, in essence, is a different approach from the one taken here, which was to identify patterns that automatically emerge and would correlate with nuanced differences in an already tapering population. We acknowledge some limitations in our analyses, such as the unknown intent of the prescribing provider. For example, the physician's choice of a rapid or slow taper may be driven by unobserved characteristics of patients or their medical histories, which may independently contribute to the resulting outcomes. We were also unable to distinguish patient-supported tapering from physician-demanded tapering, and what may have triggered taper reversals. Finally, the current data do not capture illicit opioid use, sharing of opioids prescribed for other patients, or methadone administered in certified treatment programmes. Nevertheless, our study is relevant to the research and clinical communities grappling with the opioid crisis.
There is substantial interest in understanding factors contributing to the current epidemic of opioid-related overdose deaths [15], reflected in several recent economic analyses of physician prescribing patterns and opioid abuse [18,22], statewide surveys and reports on prescribing practices and patient outcomes [14,27,34], and studies of physician prescribing patterns and outcomes [19,36]. Previous studies of opioid dose tapering either used smaller, less nationally representative cohorts or relied on supervised analytic methods, where an outcome is always defined, to identify patient characteristics that are associated with adverse outcomes.

4 CONCLUSION

Our objective was knowledge discovery: to identify hidden, unsuspected patterns in claims data for patients with chronic pain. Since our analysis was performed using a large dataset that is representative of the population of the United States, these results are generalisable. The insights from this work will be used to extend this work and guide predictive analysis. Our study also highlights the need for more detailed investigations to identify which patient factors should be considered while suggesting a dose tapering regimen. Dose tapering to discontinuation may plausibly increase the risk of subsequent opioid overdose if these opioid-dependent patients seek alternative opioids from illicit sources or mix opioids with other sedating drugs such as benzodiazepines, thereby negating the purpose of dose tapering. We find these results, obtained using a data-driven approach, to be compelling enough to warrant further investigations into dose tapering patterns to inform future national prescribing policies and clinical practice.

ACKNOWLEDGMENTS

The authors extend their sincere gratitude to Guibo Xing, Elizabeth Magnan, Alicia Agnoli and Daniel Tancredi for data sharing, as well as members of the OptumLabs OLDW team for their valuable guidance.
JZmlmsSvTP-
Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Outcomes- Review
3: Marginally above acceptance threshold
**Summary:** This work explores the characteristics of the prescribed opioid doses in a diverse population of opioid-dependent patients with appropriate insurance. Namely, the work applies different clustering methods to identify patterns in the data. The dataset used in this work was the claims data about patients with chronic pain obtained from the largest commercial insurance company and the largest private employer of physicians in the United States. The authors justify that spectral clustering might be a more suitable unsupervised clustering algorithm compared to other clustering methods and thus use it to construct clusters. The authors then explored the characteristics of the patients present in each of these clusters by defining counterfactual terms and other evaluation metrics. Notably, the work discovered different observations related to each of the clusters. **Strong Points:** + The work provides valuable insights into the characteristics related to patient behaviour towards opioid prescriptions. + The fact that the authors closely describe the remotely relevant prior study (GBTM) and accurately describe the distinctions between that study and the current work attests to the specific contribution of this work. + The work provides extensive summaries of patients across the different clusters over pre-taper, taper-initiation and post-taper timestamps. This provides important insights about characterizing patient dose trajectories. **Weak Points:** + The spectral clustering algorithm used in this work (Spectrum) is a readily available R package. As the authors did not create this algorithm, it seems pointless to spend an entire subsection on it, especially as the characteristic comparison with other clustering algorithms is not unique either. + I am not sure if the number of clusters given in this work is just making a mountain out of a molehill.
This is because, based on Figure 1, there seem to mainly be 2-3 broad clusters. Although the authors provide intuitions for the other clusters, it seems that clusters 3, 4 and 5 have very similar trends based on Figure 2. It may be more informative if the authors provided the values of the eigen-gap statistic for each value of K, where K denotes the number of clusters. Maybe GBTM is not as much of an oversimplification as the authors make it out to be. **Minor Suggestions:** + Figure 1 needs to be broken down into 2 separate plots: one plot for the 3 main clusters and the other for the remaining clusters. Otherwise the smaller clusters are not even visible. + Minor grammatical errors seem to be present; for example, in line 737, it should probably be "Finally, the current data *does* not ...".
3: The reviewer is fairly confident that the evaluation is correct
qkDCSV-RMt
KDD.org/2023/Workshop/epiDAMIK
2023
Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Events
["MONIKA RAY", "Joshua J. Fenton", "Patrick Romano"]
National opioid prescribing guidelines and related quality measures have stimulated changes in opioid prescribing. Studies have shown that rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However, we do not know enough about the trajectories of dose tapering implemented in clinical practice, and how heterogeneous populations of patients respond to different treatments. Our aim was to examine prescribed opioid doses in a large, longitudinal, clinically diverse, national population of opioid-dependent patients with either Medicare or commercial insurance. We performed phenotype clustering to identify unsuspected, novel patterns in the data. In a longitudinal cohort (2008-2018) of 113,618 patients from the OptumLabs Data Warehouse with 12 consecutive months at a high, stable mean opioid dose ($\geq$50 morphine milligram equivalents), we identified 30,932 patients with one dose tapering phase that began at the first 60-day period with $\geq$15\% reduction in average daily dose across overlapping 60-day windows through seven months of follow-up. We applied spectral clustering as we preferred an assumption-free approach with no apriori information being imposed. Spectral clustering identified several cluster-cohorts, with three that included over 98\% of the sample. These three clusters were similar in baseline characteristics, but differed markedly in the magnitude, velocity, duration, and endpoint of tapering. The cluster-cohort characterised by moderately rapid, steady tapering, most often to an end opioid dose of zero, had excess drug-related events, mental health events, and deaths, compared with a cluster characterised by very slow, steady tapering with long-term opioid maintenance. Moderately rapid tapering to discontinuation may be associated with higher risk than slow tapering with longer-term maintenance of opioid analgesia. 
Furthermore, several clusters highlighted a cohort that had complete taper reversals, indicating a treatment failure as the tapering was not maintained. Our findings suggest that identifying subtle yet clinically meaningful patterns in opioid prescribing data, such as patterns within the dose trajectories, can highlight the distinct characteristics separating subpopulations.
["high dose opioids", "spectral clustering", "patient subpopulations", "personalised medicine", "healthcare", "opioid crisis", "phenotype clustering"]
ABSTRACT

National opioid prescribing guidelines and related quality measures have stimulated changes in opioid prescribing. Studies have shown that rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However, there isn't enough research on trajectories of dose tapering implemented in clinical practice, and on how heterogeneous populations of patients respond to different treatments. Our aim was to examine prescribed opioid doses in a large, longitudinal, clinically diverse, national population of opioid-dependent patients with either Medicare or commercial insurance. We performed phenotype clustering to identify unsuspected, novel patterns in the data. In a longitudinal cohort (2008-2018) of 113,618 patients from the OptumLabs Data Warehouse with 12 consecutive months at a high, stable mean opioid dose (≥50 morphine milligram equivalents), we identified 30,932 patients with one dose tapering phase that began at the first 60-day period with ≥15% reduction in average daily dose across overlapping 60-day windows through seven months of follow-up. We applied spectral clustering, as we preferred an assumption-free approach with no a priori information being imposed. Spectral clustering identified several cluster-cohorts, with three that included over 98% of the sample. These three clusters were similar in baseline characteristics but differed markedly in the magnitude, velocity, duration, and endpoint of tapering. The cluster-cohort characterised by moderately rapid, steady tapering, most often to an end opioid dose of zero, had excess drug-related events, mental health events, and deaths, compared with a cluster characterised by very slow, steady tapering with long-term opioid maintenance. Moderately rapid tapering to discontinuation may be associated with higher risk than slow tapering with longer-term maintenance of opioid analgesia.
Furthermore, several clusters highlighted a cohort that had complete taper reversals, indicating a treatment failure as the tapering was not maintained. Our findings suggest that identifying subtle yet clinically meaningful patterns in opioid prescribing data, such as patterns within the dose trajectories, can highlight the distinct characteristics separating subpopulations.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. © 2023 Copyright held by the owner/author(s).

CCS CONCEPTS: Applied computing → Health informatics; Physical sciences and engineering.

KEYWORDS: high dose opioids, spectral clustering, patient subpopulations, phenotype clustering, opioid crisis

ACM Reference Format: Monika Ray, Joshua J. Fenton, and Patrick S. Romano. 2023. Spectral Clustering Identifies High-risk Opioid Tapering Trajectories Associated with Adverse Events. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 9 pages.

1 INTRODUCTION

National prescribing guidelines by the Centers for Disease Control and Prevention (CDC) and the current opioid overdose crisis have led to substantial dose tapering among patients on long-term opioid therapy for chronic pain, especially since 2016 [10,16,30]. A quality metric endorsed by the National Quality Forum (NQF) encourages prescribers to reduce opioid doses below 90 morphine milligram equivalents (MME) per day [33].
In the setting of long-term opioid therapy for chronic pain, several studies have shown worse outcomes associated with rapid dose reduction [1,13,17,41], and dose tapering has emerged as a complex issue for both physicians and patients. To better inform evidence-based clinical practices, health system policies, and public programmes, it is necessary to characterise population heterogeneity (phenotype clustering) and to understand which patients are appropriate candidates for different tapering approaches. This type of research requires a better understanding of the variety of tapering trajectories that clinicians implement in diverse populations, to enable comparisons of the risks and benefits of alternative approaches in relevant subpopulations. Large healthcare data warehouses that accumulate longitudinal records from multiple sources offer great opportunities for improved understanding of population heterogeneity in opioid dose management. To undertake this research, we used retrospective data from the OptumLabs Data Warehouse (OLDW), which includes longitudinal health information for over 109 million commercial enrollees and 12.5 million Medicare Advantage enrollees. We leveraged the retrospective cohort previously created by Agnoli and colleagues [1], whose prior research suggested that the peak tapering velocity has a significant mean effect on adverse outcomes.
For this purpose, we used spectral clustering to describe clinically meaningful subpopulations. Specifically, we wanted to examine similarities among patients within a cluster and differences among patients across clusters. Spectral clustering has been applied to speech processing, computer vision and exploratory data mining in biology [3,6,11,21,38,42], but opioid dosing is a novel and highly topical application in the current era of increasing opioid-related overdose death rates [15]. This work deviates from the popular hypothesis-driven approaches where the functional form of the models is independent predictors and dependent outcomes. In this data-driven approach, the aim is to first cluster phenotypes, without classifying features as independent or dependent variables, and then identify meaningful signatures within these clusters [25]. These signatures can then be used in predictive models as either predictors or outcomes. The main purpose of phenotype clustering is to uncover hidden patterns. The primary focus of our exploratory work is to see (1) how the patients cluster based on their phenotypes (grouping patterns or phenotypes) and (2) whether these clusters have any remarkable differences (i.e., to identify signatures that can be used in predictive analytics).

1.1 Data Cohort and Adverse Events

We obtained data from 2008-2018 for adults from the OptumLabs Data Warehouse (OLDW), which contains de-identified administrative claims data, including medical and pharmacy claims and eligibility information for commercial and Medicare Advantage enrollees, representing a mixture of ages and regions across the United States. The entire cohort, which we received from Agnoli and colleagues [1], had a stable baseline period of 12 consecutive months at a high opioid dose (≥50 MME), resulting in 113,618 patients.
The tapered cohort was defined as the subset of patients who had a dose tapering phase, which began on the first 60-day period with ≥15% reduction in average daily dose across overlapping 60-day windows through the initial seven months of follow-up. Patients who had ≥15% reduction in average daily dose over a longer time frame were not included due to uncertainty about the intent of slight MME dose reductions (which could be driven by delays in picking up prescriptions). To facilitate interpretation, we selected a population of patients who had only one period of tapering. Mortality in the tapered cohort was determined by analysing the time after taper initiation and matching against the records in the OLDW mortality table.

Adverse events included emergency department (ED) visits or hospitalisations for (1) drug or alcohol overdose or withdrawal (drug-related events); and (2) depression, anxiety, or suicide attempts (mental health events). Drug-related and mental health events were identified using International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes for claims from October 2015 through 2019, and ICD-9-CM diagnosis codes for claims from 2008 through September 2015. Comorbidities were identified for all patients using the available software (AHRQ "Elixhauser" Comorbidity Software) in the OLDW [12, 29]. This project was determined by the University of California Office of the President to be exempt from human subjects review, as the OLDW uses completely de-identified, anonymised data.

1.2 Analytic Methods

We considered several methods to identify subpopulations and their characteristics, such as K-means clustering and latent class analysis (LCA). K-means clustering is a popular clustering algorithm, but it is based on many restrictive assumptions, which most real-world datasets violate [20, 35]. The algorithm operates on the input data matrix and, hence, is sensitive to the size of the data (N) as well as the number of features.
LCA [23, 43], a type of finite mixture model, may be suitable for describing dose trajectories, but it requires an outcome to be specified. By comparison, spectral clustering is purely unsupervised and does not require outcome variables. For our analyses, we used a novel spectral clustering algorithm (Spectrum) developed by John and colleagues [21]. Spectral graph theory associates the spectrum of a matrix, i.e., the eigenvalues of a matrix, to the properties of a graph via the Laplacian matrix [7, 8, 37]. It operates on graphs that are constructed between neighbouring nodes that represent data points (i.e., patients). It identifies arbitrarily shaped clusters (with convex or non-convex boundaries) using the eigenvectors of the Laplacian similarity matrix [7, 9, 26, 46]. A Laplacian similarity matrix models the local neighborhood relationships between data points as an undirected graph [4, 37, 40]. Spectral clustering is robust to the geometry of the clusters and to outliers, and does not require the user to specify the number of clusters [2, 24, 46]. It identifies the number of clusters by computing the differences between the consecutive ordered eigenvalues of the graph Laplacian and identifying the first pair of consecutive eigenvalues with the maximum difference in their values.

The steps of spectral clustering are (1) creation of the similarity matrix, (2) creation of the Laplacian matrix, and finally (3) creation of clusters [32, 44]. Variations of spectral clustering algorithms address issues related to creation of the similarity matrix, graph partitioning, and speed on massive datasets. Since spectral clustering operates on the Laplacian similarity matrix, which is an N×N matrix for N data points, it is sensitive to the size of the data.
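The three steps listed above can be sketched in a short, generic Python implementation (this is an illustrative textbook version, not the Spectrum package itself; the RBF bandwidth `sigma`, the normalized Laplacian, and the small k-means routine are standard choices assumed for the sketch, and the number of clusters is taken as given):

```python
import numpy as np

def _kmeans(U, k, iters=50):
    """Tiny k-means used to cluster the spectral embedding (step 3).
    Farthest-point initialization keeps the sketch deterministic."""
    centers = [U[0]]
    for _ in range(k - 1):
        dists = np.min([((U - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(U[np.argmax(dists)])
    centers = np.array(centers)
    labels = np.zeros(len(U), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = U[labels == j].mean(axis=0)
    return labels

def spectral_clustering(X, n_clusters, sigma=1.0):
    """Illustrative three-step spectral clustering."""
    # Step 1: Gaussian (RBF) similarity matrix between all pairs of points.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Step 2: symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Step 3: embed points with the eigenvectors of the smallest eigenvalues,
    # row-normalize, and run an ordinary clustering algorithm on the embedding.
    _, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :n_clusters]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return _kmeans(U, n_clusters)
```

Because the clustering happens in the eigenvector embedding rather than the raw feature space, clusters with non-convex boundaries can still be separated, which is the property the paper relies on.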
Spectral Clustering Identifies High-risk Opioid Tapering Trajectories. epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA

The Spectrum algorithm developed by John et al. is novel in the way it combines the following features: (1) the combined Zelnik-Manor self-tuning [49] and Zhang density-aware [50] kernels to create the similarity matrix; (2) the Ng spectral clustering method to estimate the optimal number of clusters [31], and Gaussian mixture modelling (GMM) [47] to finally cluster the data; and (3) a fast approximate spectral clustering (FASP) method [48] to allow for fast clustering of massive data on regular desktop machines. The self-tuning component of the kernel adjusts to the scale of the data, while the density-aware component adapts to the local density of the data, creating more or fewer connections depending on the density of the regions. Spectrum uses the diffusion of tensor product graphs (TPG) to capture higher-order information in the data and highlight underlying patterns [39]. The final clusters are plotted using the first two principal components, PC1 and PC2. We did not use the eigengap statistic to determine the number of clusters, as it was not essential for us to constrain the number of clusters, nor were we against identifying small cohorts if a cohort had important patterns to investigate further. In our work, we were searching for anomalies or 'interesting patterns' that could explain the underlying population heterogeneity.
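The self-tuning idea can be illustrated with a short sketch of the Zelnik-Manor and Perona kernel (a simplified version for intuition only; the Spectrum package additionally blends in the Zhang density-aware term, which is omitted here, and the neighbour count `k` is an assumed parameter):

```python
import numpy as np

def self_tuning_affinity(X, k=3):
    """Zelnik-Manor & Perona self-tuning kernel (simplified sketch).
    Each point i gets a local scale sigma_i, the distance to its k-th
    nearest neighbour, so the similarity adapts to local density instead
    of relying on a single global bandwidth."""
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Column 0 of the sorted distances is the point itself (distance 0),
    # so column k holds the distance to the k-th nearest neighbour.
    sigma = np.sort(dists, axis=1)[:, k]
    W = np.exp(-dists ** 2 / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(W, 0.0)
    return W
```

In dense regions sigma_i is small, so only close neighbours get strong edges; in sparse regions sigma_i grows, so connections are preserved; this is the "more or fewer connections depending on the density" behaviour described above.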
The eigengap heuristic works well if there are well-defined clusters, but it is not of much help when there are noisy or overlapping clusters, which is likely to be the case in this data.

The variables in the input space of the spectral clustering algorithm were age, gender, monthly average opioid dose (MME), mean baseline dose, the count of drug-related events in the pre-taper and post-taper-initiation phases, the number of mental health events in the pre-taper and post-taper-initiation phases, benzodiazepine co-prescription at baseline and at 30 days, 31 Elixhauser comorbidity flags, and the change in dose across consecutive months for 12 months. The numbers of drug-related and mental health events were identified for each patient before taper and after taper initiation, as these were the adverse events of interest. We reviewed each cluster to identify the prevalence of different adverse events as well as the number of deaths after taper initiation. We report the distinguishing characteristics across the cluster subpopulations. For counterfactual inference, we identified the number and proportion of drug-related and mental health events in each cluster, and then computed the excess number of those events relative to the null assumption of equal event risk across all clusters.
The counterfactual calculation for each adverse event is given by:

ExcessEvents = NumEventsCluster − NumPatientsCluster × (TotalEvents / TotalPatients)

where, for each adverse event (i.e., mortality, drug-related events, or mental health events), ExcessEvents is the number of excess events in the cluster, NumEventsCluster is the number of observed events within the cluster, NumPatientsCluster is the number of patients in the cluster, TotalEvents is the total number of adverse events in the entire data, and TotalPatients is the total number of patients in the analysis.

2 RESULTS

Among the 113,618 patients in the entire cohort, 33,628 had one or more phases of opioid dose tapering (29.5%), based on the tapering definition of ≥15% reduction in average daily dose over 7 months of follow-up [1]. Fig. 1 shows the analytical pipeline and the resultant plot of the 10 clusters identified. We could not show all ten clusters clearly in a 2-D plot. Since spectral clustering plots the clusters by collapsing them onto the first two principal components, the multi-dimensional aspect of the clusters is not visible. However, Fig. 1 shows that the clusters are not spherical and that the data has outliers. Table 1 shows the characteristics of patients who tapered; the sample was 54% female, and 92% had only one tapering period available for analysis.

Spectral clustering of the 30,932 patients who underwent single tapers resulted in 10 clusters (groups of patients or subpopulations) with relatively similar baseline characteristics. All clusters had patients with high mean baseline doses of 140-237 MME/day. Of particular interest were the three large clusters and their baseline characteristics shown in Table 2. The other seven clusters' characteristics are discussed below but not shown due to small cell size policy. The three large clusters (1, 2, and 10) were very similar demographically, with mean ages of 58.7, 57.0, and 58.4 years, and 56%, 53%, and 50% female composition, respectively.
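The excess-event calculation is simple enough to state directly in code (a direct transcription of the formula above; the example numbers in the test are hypothetical, not taken from the study):

```python
def excess_events(num_events_cluster, num_patients_cluster,
                  total_events, total_patients):
    """Observed events in a cluster minus the count expected under the
    null assumption that every cluster shares the overall event rate."""
    expected = num_patients_cluster * (total_events / total_patients)
    return num_events_cluster - expected
```

For example, a cluster holding 100 of 1,000 patients and 50 of 200 total events has 50 − 100 × (200/1000) = 30 excess events; a negative value means the cluster experienced fewer events than the population rate predicts.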
They were also similar on baseline co-prescribing of benzodiazepines (29%, 30%, and 30%, respectively) and comorbid diagnoses during the baseline year, such as alcohol abuse and dependence (2%, 3%, and 2%, respectively), drug abuse and dependence (17%, 17%, and 15%, respectively), and depression (32%, 31%, and 30%, respectively). Furthermore, they had similar medical experiences during their pre-taper period of stable opioid dosing, with relatively few drug-related events (mean 0.042, 0.053, and 0.043, respectively) and more mental health events (mean 3.81, 4.03, and 3.66, respectively).

Fig. 2 compares the tapering trajectories across clusters. Each trajectory is plotted as the average monthly dose of the patients in the cluster. The three largest clusters had markedly different opioid dose tapering trajectories and associated adverse events, as shown in Table 3. The number of excess events represents the difference between the number of observed events and the number of events that would have occurred if all the clusters had the same event rate. About 55% of patients were in cluster 1, characterised by very slow and steady tapering to a final dose about two-thirds of baseline, with low event rates and no reversal to pre-taper baseline dose. While clusters 2 and 10 looked quite similar in their baseline characteristics, they had very different taper trajectories. Cluster 2 was characterised by relatively rapid tapering to zero or very low doses, while cluster 10 was characterised by somewhat slower tapering from lower baseline doses to higher end doses. Both these clusters had slightly higher event rates than other clusters. Clusters 2 and 10 also had more drug-related events than cluster 1 (mean 0.116 and 0.128 versus 0.074), more mental health events (mean 0.089 and 0.075 versus 0.058), and more deaths (mean 0.079 and 0.098 versus 0.036) during the tapering year.
However, compared to cluster 10, cluster 2 had higher baseline mean and median doses (192.3 and 137.0 MME versus 140.3 and 104.0 MME) and a lower mean end dose (12.9 versus 37.6 MME). The slow trajectory for cluster 1, and the very low or zero doses in clusters 2 and 10, continued into the 15th month, although those months were not included in the spectral clustering analyses.

The characteristics of the taper trajectories for all the clusters are detailed in Table 4. The left panel in Fig. 3 shows the proportion of patients with a 0 MME dose of opioids across the three clusters each month, while the right panel shows the taper trajectory. Table 5 shows the relative change in the proportion of patients who were prescribed 0 MME opioids at each time point in the three clusters. Cluster 2 had the highest proportion of patients (73%) who were completely tapered off opioids at the end of 12 months, compared to cluster 10 (66%) and cluster 1 (2%). Since cluster 1 demonstrated the safest outcomes, we compared clusters 2 and 10 to cluster 1. The graph in the left panel in Fig. 3 shows that cluster 2 had a steep yet steady upward trend in the proportion of patients who were taken off opioids, whereas patients in cluster 1 almost uniformly stayed on opioids, and cluster 10 demonstrated a pattern of delayed discontinuation.

The remaining 1.3% of patients sorted into seven smaller clusters, all of which had patients who were tapered to or close to 0 MME (not shown due to small cell size policy). In clusters 3, 4, and 5, dose tapering to near zero occurred very rapidly within 4 months after initiation, but the pre-taper dose was quickly restored and slow tapering was initiated instead. On the other hand, in clusters 6, 7, 8, and 9, rapid tapering occurred over a longer period of 6-11 months, but the taper was largely reversed and the subsequent trajectory was truncated due to the cohort design. Drug-related event rates and mental health event rates were quite variable across these small clusters (data not shown), but in aggregate, the mental health event rate of patients in these seven clusters was over twice that of cluster 1 (mean 0.117 versus 0.058).

Figure 1: Analysis Flowchart

Table 1: Characteristics of the patients who tapered

Variables | Categories | n
Gender | Female | 18,197
Gender | Male | 15,431
Age | Mean±Std. | 58.0±11.6
Number of tapers | 1 | 30,932
Number of tapers | 2 | 2,462
Number of tapers | >=3 | 234
Drug-related events before tapering | 0 | 32,238
Drug-related events before tapering | 1 | 1,182
Drug-related events before tapering | >=2 | 208
Drug-related events after tapering | 0 | 31,210
Drug-related events after tapering | 1 | 1,888
Drug-related events after tapering | 2 | 356
Drug-related events after tapering | >=3 | 174
Mental health events before tapering | 0 | 14,788
Mental health events before tapering | 1 | 3,984
Mental health events before tapering | 2 | 2,949
Mental health events before tapering | 3 | 2,040
Mental health events before tapering | 4 | 1,665
Mental health events before tapering | 5 | 1,223
Mental health events before tapering | 6 | 1,034
Mental health events before tapering | >=7 | 5,945
Mental health events after tapering | 0 | 32,041
Mental health events after tapering | 1 | 1,096
Mental health events after tapering | 2 | 300
Mental health events after tapering | >=3 | 191

Table 2: Characteristics of Clusters 1, 2 and 10 in the pre-taper period

Cluster | No. patients | Age (Mean) | Female (%) | Benzodiazepine Rx (%) | Alcohol abuse (%) | Depression (%) | Drug abuse (%) | Drug-related event counts (Mean) | Mental health event counts (Mean) | Base dose (Mean MME)
1 | 16,965 | 58.74 | 55.7 | 28.9 | 2.4 | 31.7 | 16.6 | 0.04 | 3.81 | 189.82
2 | 13,025 | 56.96 | 53.1 | 30.1 | 3.0 | 31.4 | 16.5 | 0.05 | 4.03 | 192.31
10 | 531 | 58.36 | 49.5 | 29.7 | 3.4 | 30.3 | 15.1 | 0.04 | 3.66 | 140.33

Table 3: Adverse events after taper initiation in clusters 1, 2 and 10

Cluster | No. patients (%) | Drug-related events/1000 | No. excess drug-related events | Mental health events/1000 | No. excess mental health events | Deaths/1000 | No. excess deaths
1 | 16,965 (55%) | 74.0 | -320.2 | 58.4 | -240.2 | 36.1 | -329.8
2 | 13,025 (42%) | 116.2 | 303.6 | 89.4 | 220.5 | 79.1 | 306.2
10 | 531 (<2%) | 128.1 | 18.7 | 75.3 | 1.5 | 97.9 | 22.5

Table 4: Average monthly dose (MME) for 12 months from taper initiation - Taper Trajectories

Cluster | Base dose | Mon1 | Mon2 | Mon3 | Mon4 | Mon5 | Mon6 | Mon7 | Mon8 | Mon9 | Mon10 | Mon11 | Mon12 | Taper trajectory
1 | 189.82 | 174.53 | 170.27 | 165.64 | 161.23 | 157.28 | 154.15 | 155.05 | 155.53 | 155.25 | 154.05 | 151.68 | 144.01 | Very slow, no reversal
2 | 192.31 | 175.19 | 157.04 | 139.42 | 119.01 | 96.06 | 75.19 | 59.71 | 45.49 | 33.53 | 23.35 | 15.18 | 12.90 | Rapid, no reversal
3 | 236.81 | 213.18 | 121.69 | 1.38 | 193.46 | 204.26 | 206.02 | 191.60 | 163.58 | 150.98 | 141.49 | 129.90 | 114.59 | Very rapid, complete reversal
4 | 192.57 | 179.16 | 0.44 | 185.31 | 194.26 | 194.64 | 176.29 | 167.38 | 160.98 | 150.52 | 143.25 | 134.76 | 133.31 | Very rapid, complete reversal
5 | 196.99 | 183.05 | 147.09 | 92.71 | 0.33 | 172.22 | 176.60 | 158.29 | 145.41 | 139.10 | 135.23 | 119.75 | 113.12 | Very rapid, complete reversal
6 | 212.81 | 205.10 | 182.34 | 153.96 | 106.37 | 77.02 | 5.26 | 0.00 | 168.49 | 169.27 | 152.98 | 120.84 | 115.09 | Very rapid, complete reversal
7 | 227.55 | 217.24 | 171.99 | 152.88 | 122.05 | 101.76 | 57.73 | 31.72 | 22.56 | 0.00 | 148.42 | 147.73 | 135.03 | Rapid, partial reversal
8 | 217.07 | 205.71 | 177.62 | 161.43 | 145.93 | 102.60 | 78.04 | 64.87 | 51.06 | 33.13 | 0.00 | 157.58 | 166.52 | Rapid, partial reversal
9 | 220.37 | 203.30 | 160.72 | 117.39 | 85.31 | 63.20 | 59.18 | 48.60 | 36.30 | 29.20 | 18.94 | 0.00 | 143.26 | Rapid, partial reversal
10 | 140.33 | 124.30 | 114.04 | 111.72 | 109.34 | 101.91 | 92.57 | 85.40 | 80.46 | 100.04 | 101.61 | 81.17 | 37.57 | Erratic, no reversal

Figure 2: The average monthly dose in MME for all the patients within each cluster.

Figure 3: The proportion of patients without opioids, i.e., with an average monthly dose of 0 MME, in the three clusters of interest and their corresponding tapering trajectories.

Table 5: Relative change in the proportion of patients who were prescribed 0 MME opioids by month

Month | C1 prop. patients | C1 relative change | C2 prop. patients | C2 relative change | Diff. relative changes C1-C2 | C10 prop. patients | C10 relative change | Diff. relative changes C1-C10
2nd | 0.007 | | 0.058 | | | 0.024 | |
3rd | 0.010 | 0.046 | 0.112 | 0.95 | -0.49 | 0.038 | 0.54 | -0.08
4th | 0.013 | -0.99 | 0.187 | 0.66 | -1.65 | 0.056 | 0.50 | -1.49
5th | 0.015 | 0.13 | 0.287 | 0.54 | -0.41 | 0.090 | 0.60 | -0.47
6th | 0.016 | -0.98 | 0.378 | 0.32 | -1.30 | 0.109 | 0.21 | -1.19
7th | 0.009 | -0.46 | 0.454 | 0.20 | -0.66 | 0.154 | 0.41 | -0.87
8th | 0.010 | -0.99 | 0.530 | 0.17 | -1.16 | 0.196 | 0.27 | -1.26
9th | 0.008 | -0.21 | 0.597 | 0.13 | -0.34 | 0.102 | -0.48 | 0.27
10th | 0.008 | -0.99 | 0.659 | 0.10 | -1.10 | 0.098 | -0.04 | -0.95
11th | 0.007 | -0.15 | 0.707 | 0.07 | -0.22 | 0.358 | 2.65 | -2.80
12th | 0.024 | -0.98 | 0.733 | 0.04 | -1.01 | 0.663 | 0.85 | -1.83

Relative change refers to the difference in the proportion of patients within the cluster between the current and the previous month. A negative value indicates that fewer patients were prescribed 0 MME opioids in the current month compared to the previous month.
C1 - Cluster 1; C2 - Cluster 2; C10 - Cluster 10.

3 DISCUSSION

In this large longitudinal cohort of patients with chronic pain receiving high-dose opioids at stable dosing for at least one year, spectral clustering analysis suggested wide variability in dose tapering patterns over the first year of tapering. These trajectories show notable variation in the velocity and duration of tapering, post-tapering minimum doses, and subsequent re-initiation (taper reversal) of moderate-to-high opioid doses, which was an unexpected finding. While the specific number of clusters is not important, the cohorts identified were interesting and are discussed here. The largest cluster (cluster 1, with 55% of patients) was characterised by very slow, gradual tapering from a mean baseline dose of 190 MME to 144 MME at 12 months, whereas the second largest cluster (cluster 2, with 42% of patients) was characterised by quicker and steeper tapering from a mean baseline dose of 192 MME to only 12.9 MME (with 73% of patients discontinued). The latter cluster, unlike other clusters, had a substantial excess of both drug-related and mental health events after the initiation of tapering, suggesting that tapering patients accustomed to high-dose prescription opioids down to zero may be associated with important health risks. Our results suggest that there is a significant subpopulation of patients receiving high-dose opioids for chronic pain who may not tolerate tapering to very low doses. Many of these patients may have had opioid use disorders; previous research in the OLDW has shown that such patients have better outcomes if treated with buprenorphine or methadone [45]. There wasn't any strong rationale to specify the number of clusters, as we were looking for 'interesting patterns' which could seem like outliers compared to the rest of the data.
Notably, spectral clustering identified previously unsuspected and unusual patterns in the opioid dose management data. In particular, two small clusters were characterised by rapid tapering to negligible or zero doses, followed by re-initiation of prescription opioids at moderately high doses. These patterns merit further exploration, as they strongly suggest that reversal of tapering may be a marker of an unsuccessful tapering strategy and that clinicians can safely resume prior opioid doses for some of these patients. These patients with unsuccessful tapers need to be separated and studied alongside the group of successful tapers rather than be combined, as was done when this cohort was selected for analysis (see the Data Cohort and Adverse Events section). This suggests that the definition of a tapered cohort needs to be revisited and taper reversals counted as an adverse event. Our findings highlight the importance of considering the velocity of tapering, as suggested by Agnoli and colleagues' research, along with the taper duration and post-tapering final dose, as clinicians attempt to devise safer dose tapering strategies to address the current opioid overdose epidemic in the US. Unsupervised data mining methods are powerful tools when the aim is to understand the data better and see what may have been previously missed in hypothesis-driven studies. Lastly, unsupervised knowledge discovery research helps in extracting novel, unsuspected phenomena that can be investigated using supervised methods. These methods may also challenge what was previously thought to be true; for example, by identifying previously unrecognised patterns of tapering reversal shown in Fig. 2.

During the writing of this manuscript, another report was published that analysed trajectories in patients receiving long-term opioid therapy using group-based trajectory modeling (GBTM) [5]. Binswanger's analysis identified five trajectories.
From the clinical perspective, this is interesting but is an oversimplification, as it puts all tapering patients into two groups: one slightly decreasing (which they reassigned to the stable group) and one decreasing (which they compared with the stable group). They did not clearly identify taper reversals, suggesting that all tapers are maintained over time. We selected our cohort based on whether patients tapered at some point, but did not filter to select those with decreasing trajectories based on different velocities. Hence, it is quite plausible to expect multiple groups. In addition to being fully exploratory, with no assumptions on what kind of trajectories to expect, our analysis focused on patients for whom a taper was pre-determined, to understand the different types and speeds of tapering. Therefore, our results support and facilitate future analyses comparing the outcomes of these different tapering approaches with the alternative of not tapering at all (a control group of non-tapers), which is a viable approach but was not represented in our sample. Another notable difference from Binswanger's work is that we did not assume any data properties, such as distributions or the number of anticipated clusters, to run spectral clustering, and our dataset is many times larger and representative of the entire population of the US. As we were searching for subtle differences in a population that consists of tapering patients, in order to receive an amplified signal, we needed a large cohort and methods that do not impose any assumptions on the input data or the results. This is exactly what knowledge discovery is, i.e., where the scholar keeps an open mind about the kind of patterns and information that will emerge. Unlike Binswanger's report, we did not impose any restriction on the spectral clustering algorithm.
It was during the analysis of the clusters, to understand why the patients segregated as they did, that we noticed the pattern of the trajectories was the point of subtle difference, discussed in detail above. This is work in progress, as we will need to further analyse these patterns using parametric methods and also study other potential outcomes of such tapering patterns. For the purpose of knowledge discovery, we preferred an assumption-free approach, with no a priori information being imposed in any phase of the analysis. Furthermore, as we did not have any prior knowledge of the underlying distribution patterns in this cohort, GBTM could have led us to incorrect results [28]. GBTM relies heavily on prior information, which, in essence, is a different approach from the one taken here, which was to identify patterns that automatically emerge and correlate with nuanced differences in an already tapering population.

We acknowledge some limitations in our analyses, such as the unknown intent of the prescribing provider. For example, the physician's choice of a rapid or slow taper may be driven by unobserved characteristics of patients or their medical histories, which may independently contribute to the resulting outcomes. We were also unable to distinguish patient-supported tapering from physician-demanded tapering, or what may have triggered taper reversals. Finally, the current data do not capture illicit opioid use, sharing of opioids prescribed for other patients, or methadone administered in certified treatment programmes. Nevertheless, our study is relevant to the research and clinical communities grappling with the opioid crisis.
There is substantial interest in understanding factors contributing to the current epidemic of opioid-related overdose deaths [15], reflected in several recent economic analyses of physician prescribing patterns and opioid abuse [18, 22], statewide surveys and reports on prescribing practices and patient outcomes [14, 27, 34], and studies of physician prescribing patterns and outcomes [19, 36]. Previous studies of opioid dose tapering either used smaller, less nationally representative cohorts or relied on supervised analytic methods, where an outcome is always defined, to identify patient characteristics that are associated with adverse outcomes.

4 CONCLUSION

Our objective was knowledge discovery: to identify hidden, unsuspected patterns in claims data for patients with chronic pain. Since our analysis was performed using a large dataset that is representative of the population of the United States, these results are generalisable. The insights from this work will be used to extend this research and guide predictive analysis. Our study also highlights the need for more detailed investigations to identify which patient factors should be considered while suggesting a dose tapering regimen. Dose tapering to discontinuation may plausibly increase the risk of subsequent opioid overdose if these opioid-dependent patients seek alternative opioids from illicit sources or mix opioids with other sedating drugs such as benzodiazepines, thereby negating the purpose of dose tapering. We find these results, obtained using a data-driven approach, to be compelling enough to warrant further investigations into dose tapering patterns to inform future national prescribing policies and clinical practice.

ACKNOWLEDGMENTS

The authors extend their sincere gratitude to Guibo Xing, Elizabeth Magnan, Alicia Agnoli and Daniel Tancredi for data sharing, as well as to members of the OptumLabs OLDW team for their valuable guidance.
9VUhOECGKJt
Accept
4: Good paper, accept
In this study, the authors applied spectral clustering to identify high-risk opioid tapering trajectories associated with adverse outcomes using a large-scale dataset. The study addressed an important public health issue and discovered patterns of opioid tapering using unsupervised learning. The findings can support further studies on dose tapering patterns to inform future prescribing policies and clinical practice. A couple of areas can be improved in the current manuscript. 1. More details should be provided on creating the similarity matrix and the Laplacian matrix. A few references were mentioned but without technical details. 2. Variables in this study have different units and scales. How did the author deal with the different scales? Did it matter if those variables were standardized? 3. Certain patients may use dose tapering strategies based on their health conditions. For instance, slower tapering may be used because the doctor believes the patient may have a higher risk of adverse outcomes under rapid tapering. In future studies, it would be better to control those factors to disentangle the impact of tapering patterns. A discussion on this point is warranted.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
N0qlvDjnEv
KDD.org/2023/Workshop/epiDAMIK
2023
Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation
["Dinh Song An Nguyen", "Marie-Laure Charpignon", "Kathryn L Schaber", "Maimuna S. Majumder", "Andrew Perrault"]
Throughout an infectious disease crisis, resources that can be used to slow and prevent spread are often scarce or expensive. Designing control policies to optimally allocate these resources to maximize objectives is challenging. Here, we study the case of ring vaccination, a strategy that is used to control the spread of infection by vaccinating the contacts of identified infected individuals and their contacts of contacts. Using agent-based modeling to simulate an Ebola outbreak, we introduce a risk-based ring vaccination strategy in which individuals in a ring are prioritized based on their relative infection risks. Assuming the risk of transmission by contact type is known and a fixed supply of vaccine doses is available on each day, we compared this strategy to ring vaccination without prioritization and randomized vaccination. We find that risk-based ring vaccination offers a substantial advantage over standard ring vaccination when the number of doses are limited, including reducing the daily infected count and death count, and shifting the pandemic peak by a considerable amount of time. We believe that control policies based on estimated risk can often offer significant benefits without increasing the burden of administering the policy by an unacceptable amount.
["agent-based modeling", "ring vaccination", "Ebola", "public health"]
Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation

Dinh Song An Nguyen, The Ohio State University, Columbus, Ohio, USA
Marie Charpignon, MIT, Cambridge, Massachusetts, USA
Kathryn L Schaber, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA
Maimuna Shahnaz Majumder, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA
Andrew Perrault, The Ohio State University, Columbus, Ohio, USA

ABSTRACT

Throughout an infectious disease crisis, resources that can be used to slow and prevent spread are often scarce or expensive. Designing control policies to optimally allocate these resources to maximize objectives is challenging. Here, we study the case of ring vaccination, a strategy that is used to control the spread of infection by vaccinating the contacts of identified infected individuals and their contacts of contacts. Using agent-based modeling to simulate an Ebola outbreak, we introduce a risk-based ring vaccination strategy in which individuals in a ring are prioritized based on their relative infection risks. Assuming the risk of transmission by contact type is known and a fixed supply of vaccine doses is available on each day, we compared this strategy to ring vaccination without prioritization and to randomized vaccination. We find that risk-based ring vaccination offers a substantial advantage over standard ring vaccination when the number of doses is limited, including reducing the daily infected count and death count, and shifting the pandemic peak by a considerable amount of time.
We believe that control policies based on estimated risk can often offer significant benefits without increasing the burden of administering the policy by an unacceptable amount.

Keywords: agent-based modeling, ring vaccination, Ebola, public health

*These authors co-supervised this research.

epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. ©2023 Copyright held by the owner/author(s).

ACM Reference Format:
Dinh Song An Nguyen, Marie Charpignon, Kathryn L Schaber, Maimuna Shahnaz Majumder, and Andrew Perrault. 2023. Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 6 pages.

1 Introduction

Designing control policies for infectious disease outbreaks can be challenging for several reasons, including scientific uncertainty surrounding newly emerging diseases, many objectives that can be in tension with each other, and limited access to labor and other critical resources. In this paper, we consider the case of ring vaccination, a vaccination delivery strategy that is employed when the supply of vaccines and the labor required to administer them is limited. Ring vaccination vaccinates individuals within a ring: contacts and contacts of contacts of an infected case. Given a vaccine with appropriate properties, especially the ability to safely inoculate an individual who has been recently exposed, ring vaccination can be highly effective.
It has been used as a key tool in several Ebola and smallpox outbreaks [2, 6, 7].

Ring vaccination functions by targeting individuals who would be at a higher level of risk of developing the infection, relative to the general population. For example, in the (early/late) stages of the Ebola outbreak of Gulu district, Uganda in 2000, the attack rate across the population was roughly 0.126% [12]. However, the secondary attack rate (SAR), defined as the probability that an infection occurs among susceptible people within a specific set of contacts, can better reflect the relation between social interactions and transmission risk [10]. Yang et al. [15] estimate its value at 2.5%; thus, a vaccine administered immediately after exposure would be about 20 times more effective than a randomly delivered vaccination.

However, not all individuals in a ring have the same infection risk. For instance, contacts of contacts are less likely, on average, to become infected because transmission must occur twice. Many observable and unobservable factors may contribute to this risk, including the type and duration of contact between individuals, biological differences that make some people more effective transmitters, multiple exposure paths, and behavioral differences that are caused by the presence or absence of public health monitoring (i.e., immediate self-isolation at symptom onset).

Like other control policies that target individuals with elevated risk, such as contact tracing, ring vaccination faces a fundamental challenge: the number of such individuals is roughly linear in the number of infected individuals, which varies by orders of magnitude throughout a crisis, while the amount of supplies and labor available per day is roughly fixed.
We argue that control policies can leverage estimated risk to prioritize vaccine dose allocation, yielding better performance when supplies are scarce. To that end, we propose a risk-based ring vaccination strategy that leverages the differing risks associated with different contact types, information that can be easily elicited as part of contact tracing.

We evaluate the risk-based ring strategy in an agent-based model (ABM) and consider Ebola as the case study because of its unique transmission intensity based on type of contact. We show that, when doses are highly restricted, risk-based ring vaccination yields significant benefits over standard ring vaccination and randomized vaccination by not only reducing overall transmissions and deaths but also shifting the pandemic peak. We find that the extra risk associated with ring membership is quickly diluted, as there are many more contacts of contacts than contacts, and most contacts have little transmission chance associated with them.

2 Agent-based model
We develop an ABM for Ebola Virus Disease (EVD) with N = 14652 agents (Table 1). We model two agent characteristics that influence spread and mortality: age and household membership. We replicate the household structure and age distributions from Dodd et al. [5], who collected data in Zambia and South Africa in 2005-2006, and again in Zambia in 2011. Each agent is in one of the six following discrete states on each day: Susceptible (S), Incubating (IC), Infectious (I), Vaccinated but not yet immune (V), Deceased (D), and Removed (immune or recovered) (R). State S comprises agents who have not yet received a vaccine or become immune. State I comprises agents who are capable of transmitting EVD to their contacts who are currently in S. At the end of their infectious period, agents in state I transition into state D or state R, depending on Pr(D | age).
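As an illustration, the per-agent state machine and the age-specific death transition can be sketched as follows. This is a hypothetical sketch, not the authors' implementation; the helper names are ours, and the CFR values are those listed in Table 1 (from Qin et al. [14]).

```python
import random
from enum import Enum, auto

class State(Enum):
    # Daily discrete states of an agent in the ABM
    S = auto()   # Susceptible
    IC = auto()  # Incubating
    I = auto()   # Infectious
    V = auto()   # Vaccinated, not yet immune
    D = auto()   # Deceased
    R = auto()   # Removed (immune or recovered)

# Age-specific case fatality rates (Table 1): <15, 15-59, >59
CFR_BY_AGE = [(15, 0.778), (60, 0.8587), (float("inf"), 0.957)]

def cfr(age):
    """Return Pr(D | age) for an agent ending its infectious period."""
    for upper_bound, p in CFR_BY_AGE:
        if age < upper_bound:
            return p

def end_of_infectious_period(age, rng=random):
    # Agents in state I transition to D with probability Pr(D | age), else to R.
    return State.D if rng.random() < cfr(age) else State.R
```

The lookup-table representation makes the three age brackets from Table 1 easy to audit against the source.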
We estimate the age-specific probability of death using previously reported case fatality rates (CFR) of EVD for different age groups [14].

Contacts are sampled daily. We sample household and non-household contacts separately. We assume that contact between each pair of individuals within a household occurs every day. Non-household contacts are sampled from the population according to the inter-household contact matrix from Ozella et al. [13], collected in a village in rural Malawi, accounting for the age of the person. We assume that the number of contacts follows an independent Poisson distribution for each age-age contact pair.

Each contact has an associated exposure type. For household contacts, we use and sample the exposure types and their distributions observed by Bower et al. [1], which include handling fluids, direct and indirect wet and dry contacts, and minimal to no contact. Direct contact refers to situations in which individuals come into direct contact, such as touching and caring for a patient diagnosed with EVD, whereas indirect contact refers to situations such as washing clothes or sharing the same bed with an EVD-positive patient. In addition, wet contact refers to contact with an EVD patient who is symptomatic (e.g., vomiting, bleeding, etc.), while dry contact refers to contact with patients without any symptoms. Each type of contact is associated with a different risk level. For example, a direct contact with fluids is associated with a higher risk of transmission than a dry, physical contact. We let W_{x,y,t} represent the risk ratio of the contact between agents x and y. For household contacts, it is the age-adjusted risk ratio from Bower et al. [1]. For non-household contacts, we assign the same type to each, with a risk ratio we set to match the non-household SAR reported in Dixon et al.
[4] (see Inferred parameters). W_{x,y,t} = 0 if no contact occurred.

We define the probability of transmission from agent x to agent y on day t as

Pr(base) · W_{x,y,t},

where Pr(base) is an inferred baseline probability of infection. The process for inferring this parameter is described in the next section.

Vaccination. The 2017 Guinea ring vaccination trial demonstrates that the vaccine we considered in our simulations (rVSV-ZEBOV) is safe to administer to individuals who are incubating but do not yet show symptoms [6]. Moreover, rVSV-ZEBOV has 100% effectiveness if administered after exposure. Therefore, we assume that agents in states IC and S are eligible for vaccination. After vaccination, they transition to state V, and nine days later, they transition to state R, where agents are considered immune.

Inferred parameters. We need to infer the parameters Pr(base) and RR(non-household), the non-household risk ratio, from data. Pr(base) can be interpreted as the probability of transmission for a household contact of the minimal contact type. We set this value in order to match the secondary attack rate (SAR) of the ABM to the SAR that was previously reported for Ebola. Specifically, we solve the following equation for Pr(base):

SAR_hh = Pr(base) · Σ_i Pr(i | household contact) · RR(i),   (1)

where Pr(i) is the probability of a contact having type i and RR(i) is the risk ratio associated with contact type i. This results in Pr(base) = 0.01962. With Pr(base) identified, we can solve for RR(non-household):

SAR_non-hh = Pr(base) · RR(non-household),   (2)

resulting in RR(non-household) = 2.45, an intensity between indirect wet and indirect dry contact.

Table 1. Parameters for the ABM.

Ebola dynamics:
- Incubation period: Lognormal, μ = 2.446 days, σ = 0.284 (Legrand et al. [9])
- Infectious period: Lognormal, μ = 2.2915 days, σ = 0.1332 (Legrand et al. [9])
- Case fatality rate: ages < 15: 77.8%; ages 15-59: 85.87%; ages > 59: 95.7% (Qin et al. [14])
- Time from vaccination to immunity: 9 days (Kucharski et al. [8])
- Household secondary attack rate: 12.3% (Dixon et al. [4])
- Non-household secondary attack rate: 4.8% (Dixon et al. [4])
- Non-household contact matrix (independent Poisson λ): adults-children 1.2; adults-adolescents 1.5; adults-adults 5.3; adolescents-children 2.0; adolescents-adolescents 3.6; children-children 0.2 (Ozella et al. [13])

Inferred model parameters:
- Base probability of transmission: 0.01962 (inferred from Bower et al. [1])
- Contact type distribution (household) and risk ratios (RR) (Bower et al. [1]): handled fluids 16.3%, RR 9.7; direct wet contact 40.3%, RR 8.3; direct dry contact 17%, RR 5.6; indirect wet contact 2.6%, RR 4.9; indirect dry contact 10%, RR 1.3; minimal contact 13.8%, RR 1
- Risk ratio for non-household: 2.45 (inferred from Equation 2)

3 Risk-based ring vaccination
In the risk-based ring vaccination strategy, we prioritize the limited vaccine doses to agents within a ring with the highest estimated risks. The estimation strategy for risks needs to be simple and use only information that is easy to observe. Specifically, we propose estimating risks based on contact type and household membership, and doing so only within a ring; thus, there are at most two contact events that contribute to any estimated risk. We assume that risks are estimated separately for each ring and that there is no coordination between rings. Risks are updated for each individual at most once: we update them for contacts of contacts if the contact becomes infected.

We define a ring as the contacts and contacts of contacts of the infected agent. Let x denote the seed case for the ring, y denote a contact of x, and z denote a contact of y.
We define the risk for y as

R(y) = Pr(base) · W_{x,y},   (3)

where W_{x,y} is the risk ratio associated with the highest-intensity contact between x and y after x developed symptoms, i.e., max_t W_{x,y,t} with t in x's infectious period. For z, we define the risk as

R(z | y is not infected) = Pr(base) · W_{x,y} · Pr(base) · W_{y,z},   (4)
R(z | y is infected) = Pr(base) · W_{y,z},   (5)

using Equation 4 if y is not known to be infected and updating to use Equation 5 if y becomes infected.

Individuals in the ring are then vaccinated in order of their risk ranking, i.e., each day the U unvaccinated, symptom-free individuals with the highest risk are vaccinated. If there are still some vaccines left after everyone in the ring has been vaccinated, which can happen when individuals are unreachable during the vaccination process or in the later stages of the outbreak, then the remaining vaccines are randomly distributed to the susceptible agents that are not in the identified clusters.

4 Preliminary results
We compare the risk-based ring vaccination approach to three baselines: random vaccination, full ring vaccination, and no-prioritization ring vaccination. All baselines vaccinate only individuals that have no symptoms and are unvaccinated (i.e., individuals in states S and IC). In random vaccination, U individuals are vaccinated at random each day. In no-prioritization ring, U individuals that are in a ring are vaccinated and any leftover vaccines are randomly distributed. In full ring, all individuals in a ring are vaccinated, relaxing the constraint of U vaccines per day. In all cases, each individual has a 30% chance of being unreachable (as in [8]). The dose that would go to that individual instead goes to the next eligible agent (i.e., the next-highest risk in risk-based, or another agent in the ring in no-prioritization ring).
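As an end-to-end sketch (not the authors' code; names and example members are ours), the parameter inference of Equations 1 and 2 can be combined with the ring risk scores of Equations 3-5, with the daily step picking the U highest-risk eligible ring members:

```python
# --- Equations 1-2: infer Pr(base) and RR(non-household) from Table 1 ---
HH_CONTACT_TYPES = {  # type: (probability, risk ratio), from Bower et al.
    "handled fluids": (0.163, 9.7), "direct wet": (0.403, 8.3),
    "direct dry": (0.170, 5.6), "indirect wet": (0.026, 4.9),
    "indirect dry": (0.100, 1.3), "minimal": (0.138, 1.0),
}
SAR_HH, SAR_NON_HH = 0.123, 0.048  # secondary attack rates (Dixon et al.)

PR_BASE = SAR_HH / sum(p * rr for p, rr in HH_CONTACT_TYPES.values())
RR_NON_HH = SAR_NON_HH / PR_BASE  # ~2.45; PR_BASE ~0.0196 (paper: 0.01962)

# --- Equations 3-5: risk scores within a ring ---
def risk_contact(w_xy):
    """Eq. 3: risk of y, a direct contact of seed case x."""
    return PR_BASE * w_xy

def risk_contact_of_contact(w_xy, w_yz, y_infected):
    """Eq. 5 once y is known to be infected, Eq. 4 otherwise."""
    if y_infected:
        return PR_BASE * w_yz
    return (PR_BASE * w_xy) * (PR_BASE * w_yz)

def prioritize(ring, num_doses):
    """Daily step: the num_doses highest-risk eligible members are vaccinated."""
    eligible = [m for m in ring if m["eligible"]]
    ranked = sorted(eligible, key=lambda m: m["risk"], reverse=True)
    return [m["id"] for m in ranked[:num_doses]]

ring = [  # illustrative members, using risk ratios from Table 1
    {"id": "y1", "eligible": True, "risk": risk_contact(8.3)},
    {"id": "z1", "eligible": True,
     "risk": risk_contact_of_contact(8.3, RR_NON_HH, y_infected=False)},
    {"id": "z2", "eligible": True,
     "risk": risk_contact_of_contact(5.6, 9.7, y_infected=True)},
]
print(prioritize(ring, 2))  # ['z2', 'y1']
```

Note that z2, a contact of contact whose intermediate contact is infected, outranks the direct contact y1: under Equation 5 only one transmission step remains, so only its risk ratio matters.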
We simulate the ABM with 10 seed cases selected uniformly at random from the population.

By ranking the individuals who are most at risk, risk-based ring vaccination substantially reduces the number of infections and deaths (Fig. 1 and Tab. 2). However, the impact of risk-based prioritization varies significantly across dose limits. At all dose limits, we see a statistically significant difference between risk-based prioritization and standard ring vaccination. This difference is most salient for moderate dose limits: at 100 daily doses, risk-based reduces deaths by roughly 2 times that of randomized vaccination and 1.8 times that of no-prioritization ring. With 200 doses available, both risk-based and no-prioritization ring differ substantially from randomized vaccination, whereas at 50 and 100 doses, no-prioritization ring and random achieve relatively similar performance. In the case of 50 daily doses, risk-based ring has a smaller impact on the number of infections and deaths (<9% relative to random). However, we see substantial shifting of the infection curve in this setting, delaying the peak by about 20 days.

The full ring strategy (without dose limit) results in few deaths, as the vaccine for EVD is highly effective even when administered after exposure and even when 30% of contacts are unreachable at the time of vaccination. However, the cost of this performance is the need for a surge of vaccination in the first month of 321±179 doses per day. This approach achieves control early, resulting in an average of 111±152 daily doses across the whole period.

5 Discussion and Future Work
Creating control policies during an outbreak is challenging due to resource constraints such as limited healthcare personnel and medical supplies. Using an ABM, we study the impact of ring vaccination strategies under a daily dose limit, and consider EVD as the case study, specifically.
We find that, even with a vaccination-infection combination that is highly suited to ring vaccination, ring vaccination has limited impact on new infections relative to random vaccination until the number of doses available is sufficiently high. Moreover, the implementation of risk-based ring vaccination we consider requires only slightly more information (contact types), but has an impact even at much lower numbers of delivered doses.

It is expected to observe phase transitions in vaccination programs due to the exponential dynamics involved in infections: when the number of daily vaccine doses passes a threshold, infections will decay exponentially, and the outbreak can be contained. However, this intuition does not apply directly to ring vaccination. Despite the ability of ring vaccination to identify individuals who have a higher risk of infection than the broader population, the impact on new infections is relatively modest. A small modification of standard ring vaccination, involving risk-based prioritization among documented contacts, induces dramatically different behavior. Specifically, for a small number of doses (Fig. 1), a risk-based approach yields a shift in the time at which the peak in new infections is reached, thus postponing a surge more efficiently than standard ring vaccination and randomized vaccination. Moreover, above a certain threshold, lying between 50 and 100 daily doses in our model, the benefits of the risk-based approach compound and the shift in the timing of the peak is coupled with a significant reduction in the maximum number of new infections. These two distinct effects and their potential coupling are not well understood and merit further study.

A key question is whether more sophisticated vaccination strategies such as ring vaccination are worth the additional overhead cost of reliably identifying and contact tracing cases.
The answer to this question is multi-faceted and will depend on the interplay among outbreak stage, vaccine availability, and the combination of vaccination and infection properties. More effort is needed to understand these interactions: during an infectious disease emergency, resources are scarce and need to be allocated towards the geographical areas or subpopulations that result in the highest impacts, i.e., the largest reduction in the maximum number of new infections and the greatest delay in the timing of the peak.

Our study has several limitations. Our current ABM does not incorporate realistic superspreading dynamics. Yet many infectious diseases demonstrate a high degree of transmission heterogeneity, i.e., relatively few seed cases cause many secondary infections [11]. While not well captured in our model, this aspect has substantial consequences for ring vaccination because the variance of the strategy's outcome is increased, i.e., a single missed secondary case can have a much larger impact on the timing of the peak in new infections and its magnitude than in the absence of transmission heterogeneity. We suspect that accounting for superspreading events would further reduce the benefits of ring vaccination. However, in some circumstances, pronounced superspreading can make risk-based targeting more effective, as observations from a given ring can be used to infer the transmission potential of the seed case.

Furthermore, it is already a hard task to gather contacts and contacts of contacts to form a ring for vaccination. Obtaining information regarding exposure types between infected individuals and their contacts is even more time and resource intensive. Although risk-based ring vaccination is more effective in our results, it is important to consider additional factors like timing and human resources in order to better evaluate the efficacy of our method.

By design, ring vaccination targets individuals who have a higher number of contacts or are more centrally located in a network. These individuals tend to get infected earlier than their counterparts with an average number of contacts and centrality [3]. Risk-based ring vaccination, by prioritizing individuals with contacts at higher risk, will additionally target individuals in larger households.

Figure 1. The daily mean count (± standard deviation) of infected under different vaccination strategies, with (a) 50, (b) 100, and (c) 200 daily doses. We simulate outbreaks with 10 seed cases for each policy given different levels of vaccine availability. The shaded region indicates the standard deviation for each vaccination strategy.

Table 2. Mean (95% CI) count of deceased for each strategy and dose limit.

Strategy | 50 doses | 100 doses | 200 doses
Risk-based ring | 8465.77 (8370.63-8560.91) | 3268.67 (1399.83-5137.50) | 175.77 (144.14-207.4)
No prioritization ring | 9184 (9101.12-9266.88) | 6091.50 (5915.62-6267.38) | 784.7 (663.08-906.32)
Random | 9272.33 (9164.44-9380.22) | 6488.57 (6425.06-6552.09) | 2044.4 (1627.39-2461.41)

Full ring (no dose limit): 27.33 (10.79-43.87)
No vaccination: 12189.80 (12156.43-12223.17)
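As a quick sanity check (ours, not part of the original analysis), the relative death reductions quoted in Section 4 can be recomputed from the Table 2 means at the 100-daily-dose limit:

```python
# Mean death counts at the 100-daily-dose limit (Table 2)
deaths = {"risk_based": 3268.67, "no_prioritization": 6091.50, "random": 6488.57}

vs_random = deaths["random"] / deaths["risk_based"]
vs_no_prio = deaths["no_prioritization"] / deaths["risk_based"]

print(round(vs_random, 2))   # 1.99: "roughly 2 times that of randomized vaccination"
print(round(vs_no_prio, 2))  # 1.86: "1.8 times for no prioritization ring"
```

Both ratios match the factors of about 2 and 1.8 stated in the results.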
This additional feature operates independently from the "encirclement" aspect of standard ring vaccination; more work is needed to quantify their respective contributions (e.g., by comparing risk-based vaccination to strategies that prioritize individuals based on household size).

Acknowledgments
KS was supported in part by grant SES2200228 from the National Science Foundation. MSM was supported in part by grant R35GM146974 from the National Institute of General Medical Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References
[1] Hilary Bower, Sembia Johnson, Mohamed S Bangura, Alie Joshua Kamara, Osman Kamara, Saidu H Mansaray, Daniel Sesay, Cecilia Turay, Francesco Checchi, and Judith R Glynn. 2016. Exposure-specific and age-specific attack rates for Ebola virus disease in Ebola-affected households, Sierra Leone. Emerging Infectious Diseases 22, 8 (2016), 1403.
[2] Ebola ça Suffit Ring Vaccination Trial Consortium. 2015. The ring vaccination trial: a novel cluster randomised controlled trial design to evaluate vaccine efficacy and effectiveness during outbreaks, with special reference to Ebola. BMJ: British Medical Journal 351 (2015), h3740.
[3] Nicholas A Christakis and James H Fowler. 2010. Social network sensors for early detection of contagious outbreaks. PLoS ONE 5, 9 (2010), e12948.
[4] Meredith G Dixon, Melanie M Taylor, Jacob Dee, Avi Hakim, Paul Cantey, Travis Lim, Hawa Bah, Sékou Mohamed Camara, Clement B Ndongmo, Mory Togba, et al. 2015. Contact tracing activities during the Ebola virus disease epidemic in Kindia and Faranah, Guinea, 2014. Emerging Infectious Diseases 21, 11 (2015), 2022.
[5] Peter J Dodd, Clare Looker, Ian D Plumb, Virginia Bond, Ab Schaap, Kwame Shanaube, Monde Muyoyeta, Emilia Vynnycky, Peter Godfrey-Faussett, Elizabeth L Corbett, et al. 2016. Age- and sex-specific social contact patterns and incidence of Mycobacterium tuberculosis infection. American Journal of Epidemiology 183, 2 (2016), 156-166.
[6] Ana Maria Henao-Restrepo, Anton Camacho, Ira M Longini, Conall H Watson, W John Edmunds, Matthias Egger, Miles W Carroll, Natalie E Dean, Ibrahima Diatta, Moussa Doumbia, et al. 2017. Efficacy and effectiveness of an rVSV-vectored vaccine in preventing Ebola virus disease: final results from the Guinea ring vaccination, open-label, cluster-randomised trial (Ebola Ça Suffit!). The Lancet 389, 10068 (2017), 505-518.
[7] Mirjam Kretzschmar, Susan Van den Hof, Jacco Wallinga, and Jan Van Wijngaarden. 2004. Ring vaccination and smallpox control. Emerging Infectious Diseases 10, 5 (2004), 832.
[8] Adam J Kucharski, Rosalind M Eggo, Conall H Watson, Anton Camacho, Sebastian Funk, and W John Edmunds. 2016. Effectiveness of ring vaccination as control strategy for Ebola virus disease. Emerging Infectious Diseases 22, 1 (2016), 105.
[9] Judith Legrand, Rebecca Freeman Grais, Pierre-Yves Boelle, Alain-Jacques Valleron, and Antoine Flahault. 2007. Understanding the dynamics of Ebola epidemics. Epidemiology & Infection 135, 4 (2007), 610-621.
[10] Yang Liu, Rosalind M Eggo, and Adam J Kucharski. 2020. Secondary attack rate and superspreading events for SARS-CoV-2. The Lancet 395, 10227 (2020), e47.
[11] James O Lloyd-Smith, Sebastian J Schreiber, P Ekkehard Kopp, and Wayne M Getz. 2005. Superspreading and the effect of individual variation on disease emergence. Nature 438, 7066 (2005), 355-359.
[12] SI Okware, FG Omaswa, S Zaramba, A Opio, JJ Lutwama, J Kamugisha, EB Rwaguma, P Kagwa, and M Lamunu. 2002. An outbreak of Ebola in Uganda. Tropical Medicine & International Health 7, 12 (2002), 1068-1075.
[13] Laura Ozella, Daniela Paolotti, Guilherme Lichand, Jorge P Rodríguez, Simon Haenni, John Phuka, Onicio B Leal-Neto, and Ciro Cattuto. 2021. Using wearable proximity sensors to characterize social contact patterns in a village of rural Malawi. EPJ Data Science 10, 1 (2021), 46.
[14] Enqiang Qin, Jingfeng Bi, Min Zhao, Ye Wang, Tongsheng Guo, Tao Yan, Zhiwei Li, Juan Sun, Jieli Zhang, Suhong Chen, et al. 2015. Clinical features of patients with Ebola virus disease in Sierra Leone. Clinical Infectious Diseases 61, 4 (2015), 491-495.
[15] Yingrui Yang, Ashley McKhann, Sixing Chen, Guy Harling, and Jukka-Pekka Onnela. 2019. Efficient vaccination strategies for epidemic control using network information. Epidemics 27 (2019), 115-122.
4H1jTVEwu_K
Review of risk-based ring vaccination
4: Good paper, accept
In this paper, the authors investigate a risk-based ring vaccination strategy. Ring vaccination is a vaccine allocation strategy that vaccinates the contacts and contacts-of-contacts of an infected case. Here, the authors use an agent-based model to simulate an Ebola outbreak and test a variant of ring vaccination that prioritizes individuals within the contact-of-contact network with the highest risk (with risks estimated from the model). They show through their simulations that risk-based ring vaccination is significantly more effective than ring vaccination without prioritization, especially when more doses (100 or 200) of the vaccine are available.

Strengths
+ Risk-based ring vaccination is a nice idea and well-motivated
+ The authors clearly demonstrate the effectiveness of this strategy through simulations
+ The model is largely motivated by prior literature and uses parameters from prior work

Weaknesses
- The results feel almost like a foregone conclusion given the model, since they use risks from the model to decide which individuals to prioritize. It would be useful to establish, especially through mathematical analysis if possible, whether we should be "surprised" by the results, or which settings must hold true for risk-based to be significantly more effective.
- A lot of design decisions are made within the model, e.g., levels of contact and types of contact within/across households, disease parameters, etc. While it helps that parameters were mostly set based on prior literature, it would be useful to conduct sensitivity analyses to see how model results vary based on the decisions made.
- Unclear if the authors were the first to do risk-based ring vaccination. Also, unclear how realistic this model is in real life, since their simulation uses the individual's "real" risk from the model to determine prioritization.
In reality, it already seems hard to get an infected person's contacts and contacts-of-contacts; it would be even harder to know the levels of contact/risk between all these people.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
N0qlvDjnEv
KDD.org/2023/Workshop/epiDAMIK
2023
Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation
["Dinh Song An Nguyen", "Marie-Laure Charpignon", "Kathryn L Schaber", "Maimuna S. Majumder", "Andrew Perrault"]
Throughout an infectious disease crisis, resources that can be used to slow and prevent spread are often scarce or expensive. Designing control policies to optimally allocate these resources to maximize objectives is challenging. Here, we study the case of ring vaccination, a strategy that is used to control the spread of infection by vaccinating the contacts of identified infected individuals and their contacts of contacts. Using agent-based modeling to simulate an Ebola outbreak, we introduce a risk-based ring vaccination strategy in which individuals in a ring are prioritized based on their relative infection risks. Assuming the risk of transmission by contact type is known and a fixed supply of vaccine doses is available on each day, we compared this strategy to ring vaccination without prioritization and randomized vaccination. We find that risk-based ring vaccination offers a substantial advantage over standard ring vaccination when the number of doses are limited, including reducing the daily infected count and death count, and shifting the pandemic peak by a considerable amount of time. We believe that control policies based on estimated risk can often offer significant benefits without increasing the burden of administering the policy by an unacceptable amount.
["agent-based modeling", "ring vaccination", "Ebola", "public health"]
Risk-Based Ring Vaccination: A Strategy for PandemicControl and Vaccine AllocationDinh Song An NguyenThe Ohio State UniversityColumbus, Ohio, [email protected] CharpignonMITCambridge, Massachusetts, [email protected] L SchaberBoston’s Children Hospital, HarvardMedical SchoolBoston, Massachusetts, [email protected] Shahnaz Majumder∗Boston’s Children Hospital, HarvardMedical SchoolBoston, Massachusetts, [email protected] Perrault∗The Ohio State UniversityColumbus, Ohio, [email protected] an infectious disease crisis, resources that canbe used to slow and prevent spread are often scarce or expen-sive. Designing control policies to optimally allocate theseresources to maximize objectives is challenging. Here, westudy the case of ring vaccination, a strategy that is used tocontrol the spread of infection by vaccinating the contacts ofidentified infected individuals and their contacts of contacts.Using agent-based modeling to simulate an Ebola outbreak,we introduce a risk-based ring vaccination strategy in whichindividuals in a ring are prioritized based on their relativeinfection risks. Assuming the risk of transmission by con-tact type is known and a fixed supply of vaccine doses isavailable on each day, we compared this strategy to ring vac-cination without prioritization and randomized vaccination.We find that risk-based ring vaccination offers a substantialadvantage over standard ring vaccination when the numberof doses are limited, including reducing the daily infectedcount and death count, and shifting the pandemic peak by aconsiderable amount of time. 
We believe that control policiesbased on estimated risk can often offer significant benefitswithout increasing the burden of administering the policyby an unacceptable amount.Keywords: agent-based modeling, ring vaccination, Ebola,public health∗These authors co-supervised this research.Permission to make digital or hard copies of part or all of this work forpersonal or classroom use is granted without fee provided that copies arenot made or distributed for profit or commercial advantage and that copiesbear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contactthe owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).ACM Reference Format:Dinh Song An Nguyen, Marie Charpignon, Kathryn L Schaber,Maimuna Shahnaz Majumder, and Andrew Perrault. 2023. Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vac-cine Allocation. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDDInternational Workshop on Epidemiology meets Data Mining andKnowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM,New York, NY, USA, 6 pages.1 IntroductionDesigning control policies for infectious disease outbreakscan be challenging for several reasons, including scientificuncertainty surrounding newly emerging diseases, manyobjectives that can be in tension with each other, and limitedaccess to labor and other critical resources. In this paper,we consider the case of ring vaccination , a vaccination deliv-ery strategy that is employed when the supply of vaccinesand the labor required to administer them is limited. Ringvaccination vaccinates individuals within a ring, contactsand contacts of contacts of an infected case. Given a vaccinewith appropriate properties, especially the ability to safelyinoculate an individual who has been recently exposed, ringvaccination can be highly effective. 
It has been used as a keytool in several Ebola and smallpox outbreaks [2, 6, 7].Ring vaccination functions by targeting individuals whowould be at a higher level of risk of developing the infec-tion, relative to the general population. For example, in the(early/late) stages of Ebola outbreak of Gulu district, Ugandain 2000, the attack rate across the population was roughly0.126% [12]. However, the secondary attack rate (SAR), de-fined as the probability that an infection occurs among sus-ceptible people within a specific set of contacts, can betterreflect the relation between social interactions and transmis-sion risk [ 10]. Yang et al . [15] estimate its value at 2.5%—thus,a vaccine administered immediately after exposure wouldbe about 20 times more effective compared to a randomlydelivered vaccination.epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Dinh Song An Nguyen, Marie Charpignon, Kathryn Schaber, Maimuna Majumder, Andrew PerraultHowever, not all individuals in a ring have the same in-fection risk. For instance, contacts of contacts are less likely,on average, to become infected because transmission mustoccur twice. Many observable and unobservable factors maycontribute to this risk, including the type and duration ofcontact between individuals, biological differences that makesome people more effective transmitters, multiple exposurepaths, and behavioral differences that are caused by the pres-ence or absence of public health monitoring (i.e., immediateself isolation at symptom onset).Like other control policies that target individuals withelevated risk such as contact tracing, ring vaccination facesa fundamental challenge that the number of such individu-als is roughly linear in the number of infected individuals,which varies by orders of magnitude throughout a crisis,but the amount of supplies and labor available per day isroughly fixed. 
We argue that control policies can leverageestimated risk to prioritize vaccine dose allocation, yieldingbetter performance when supplies are scarce. To that end, wepropose a risk-based ring vaccination strategy that leveragesthe differing risks associated with different contact types,information that can be easily elicited as part of contacttracing.We evaluate the risk-based ring strategy in an agent-basedmodel (ABM) and consider Ebola as the case study becauseof its unique transmission intensity bases on type of contact.We show that, when doses are highly restricted, risk-basedring vaccination yields significant benefits over standardring vaccination and randomized vaccination by not onlyreducing overall transmissions and deaths but also shiftingthe pandemic peak. We find that the extra risk associatedwith ring membership is quickly diluted as there are manymore contacts of contacts than contacts, and most contactshave little transmission chance associated with them.2 Agent-based modelWe develop an ABM for Ebola Virus Disease (EVD) withN=14652 agents (Table 1). We model two agent characteris-tics that influence spread and mortality: age and householdmembership. We replicate the household structure and agedistributions from Dodd et al . [5], who collected data in Zam-bia and South Africa in 2005-2006, and again in Zambia in2011. Each agent is in one of the six following discrete stateson each day: Susceptible (S), Incubating(IC), Infectious(I),Vaccinated but not yet immune (V), Deceased(D), and Re-moved (immune or recovered) (R). StateScomprises agentswho have not yet received a vaccine or become immune.StateIcomprises agents who are capable of transmittingEVD to their contacts who are currently in S. At the endof their infectious period, agents in state Itransition intostateDor stateR, depending on Pr(D|age). 
We estimate the age-specific probability of death using previously reported case fatality rates (CFR) of EVD for different age groups [14].

Contacts are sampled daily. We sample household and non-household contacts separately. We assume that contact between each pair of individuals within a household occurs every day. Non-household contacts are sampled from the population according to the inter-household contact matrix from Ozella et al. [13], collected in a village in rural Malawi, accounting for the age of the person. We assume that the number of contacts follows an independent Poisson distribution for each age-age contact pair.

Each contact has an associated exposure type. For household contacts, we use and sample the exposure types and their distributions observed by Bower et al. [1], which include handling fluids, direct and indirect wet and dry contacts, and minimal to no contact. Direct contact refers to situations in which individuals come into direct contact, such as touching and caring for a patient diagnosed with EVD, whereas indirect contact refers to situations such as washing clothes or sharing the same bed with an EVD-positive patient. In addition, wet contact refers to contact with an EVD patient who is symptomatic (e.g., vomiting, bleeding, etc.), while dry contact refers to contact with patients without any symptoms. Each type of contact is associated with a different risk level. For example, direct contact with fluids is associated with a higher risk of transmission than dry, physical contact. We let W_{x,y,t} represent the risk ratio of the contact between agents x and y. For household contacts, it is the age-adjusted risk ratio from Bower et al. [1]. For non-household contacts, we assign the same type to each, with a risk ratio we set to match the non-household SAR reported in Dixon et al.
[4] (see Inferred parameters). W_{x,y,t} = 0 if no contact occurred.

We define the probability of transmission from agent x to agent y on day t as

Pr(base) · W_{x,y,t},

where Pr(base) is an inferred baseline probability of infection. The process for inferring this parameter is described in the next section.

Vaccination. The 2017 Guinea ring vaccination trial demonstrates that the vaccine we considered in our simulations (rVSV-ZEBOV) is safe to administer to individuals who are incubating but do not yet show symptoms [6]. Moreover, rVSV-ZEBOV has 100% effectiveness if administered after exposure. Therefore, we assume that agents in states IC and S are eligible for vaccination. After vaccination, they transition to state V, and nine days later, they transition to state R, where agents are considered immune.

Inferred parameters. We need to infer the parameters Pr(base) and RR(non-household), the non-household risk ratio, from data. Pr(base) can be interpreted as the probability of transmission for a household contact of the minimal contact type. We set this value in order to match the secondary attack rate (SAR) of the ABM to the SAR that was previously reported for Ebola. Specifically, we solve the following equation for Pr(base):

SAR_hh = Pr(base) · Σ_i Pr(i | household contact) · RR(i),   (1)

where Pr(i) is the probability of a contact having type i and RR(i) is the risk ratio associated with contact type i. This results in Pr(base) = 0.01962. With Pr(base) identified, we can solve for RR(non-household):

SAR_non-hh = Pr(base) · RR(non-household),   (2)

resulting in RR(non-household) = 2.45, an intensity between indirect wet and indirect dry contact.

Table 1. Parameters for the ABM.

Ebola dynamics:
- Incubation period: Lognormal, μ = 2.446 days, σ = 0.284 (Legrand et al. [9])
- Infectious period: Lognormal, μ = 2.2915 days, σ = 0.1332 (Legrand et al. [9])
- Case fatality rate: ages < 15: 77.8%; ages 15-59: 85.87%; ages > 59: 95.7% (Qin et al. [14])
- Time from vaccination to immunity: 9 days (Kucharski et al. [8])
- Household secondary attack rate: 12.3% (Dixon et al. [4])
- Non-household secondary attack rate: 4.8% (Dixon et al. [4])
- Non-household contact matrix (Ozella et al. [13]): Adults-Children: Poisson, λ = 1.2; Adults-Adolescents: Poisson, λ = 1.5; Adults-Adults: Poisson, λ = 5.3; Adolescents-Children: Poisson, λ = 2.0; Adolescents-Adolescents: Poisson, λ = 3.6; Children-Children: Poisson, λ = 0.2

Inferred model parameters:
- Base probability of transmission: 0.01962 (inferred from Bower et al. [1])
- Contact type distribution (household) and risk ratios (RR) (Bower et al. [1]): handled fluids: 16.3%, RR 9.7; direct wet contact: 40.3%, RR 8.3; direct dry contact: 17%, RR 5.6; indirect wet contact: 2.6%, RR 4.9; indirect dry contact: 10%, RR 1.3; minimal contact: 13.8%, RR 1
- Risk ratio for non-household: 2.45 (inferred from Equation 2)

3 Risk-based ring vaccination

In the risk-based ring vaccination strategy, we prioritize the limited vaccine doses to agents within a ring with the highest estimated risks. The estimation strategy for risks needs to be simple and only use information that is easy to observe. Specifically, we propose estimating risks based on contact type and household membership, and doing so only within a ring; thus, there are at most two contact events that contribute to any estimated risk. We assume that risks are estimated separately for each ring and that there is no coordination between rings. Risks are updated for each individual at most once: we update them for contacts of contacts if the contact becomes infected.

We define a ring as the contacts and contacts of contacts of the infected agent. Let x denote the seed case for the ring, y denote a contact of x, and z denote a contact of y.
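The parameter inference in Equations (1) and (2) can be reproduced directly from the reported values: the household exposure-type distribution and risk ratios from Bower et al. [1] and the SARs from Dixon et al. [4]. The sketch below is illustrative; the variable and function names are ours, not the authors':

```python
# Pr(i | household contact) and RR(i) from Bower et al. [1] (Table 1).
p = {
    "handled_fluids": 0.163, "direct_wet": 0.403, "direct_dry": 0.170,
    "indirect_wet": 0.026, "indirect_dry": 0.100, "minimal": 0.138,
}
rr = {
    "handled_fluids": 9.7, "direct_wet": 8.3, "direct_dry": 5.6,
    "indirect_wet": 4.9, "indirect_dry": 1.3, "minimal": 1.0,
}
SAR_HH, SAR_NON_HH = 0.123, 0.048  # Dixon et al. [4]

# Equation (1): SAR_hh = Pr(base) * sum_i Pr(i | household contact) * RR(i).
pr_base = SAR_HH / sum(p[i] * rr[i] for i in p)

# Equation (2): SAR_non-hh = Pr(base) * RR(non-household).
rr_non_household = SAR_NON_HH / pr_base

def transmission_probability(contact_type):
    """Pr(base) * W_{x,y,t} for a single day's contact of the given type."""
    w = rr_non_household if contact_type == "non_household" else rr[contact_type]
    return pr_base * w
```

With these inputs, pr_base comes out near the paper's 0.01962 and rr_non_household near 2.45; a direct wet household contact then carries a daily transmission probability of roughly pr_base × 8.3 ≈ 0.163.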
We define the risk for y as

R(y) = Pr(base) · W_{x,y},   (3)

where W_{x,y} is the risk ratio associated with the highest-intensity contact between x and y after x developed symptoms, i.e., max_t W_{x,y,t} with t in x's infectious period. For z, we define the risk as

R(z | y is not infected) = Pr(base) · W_{x,y} · Pr(base) · W_{y,z},   (4)
R(z | y is infected) = Pr(base) · W_{y,z},   (5)

using Equation 4 if y is not known to be infected and updating to use Equation 5 if y becomes infected.

Individuals in the ring are then vaccinated in order of their risk ranking, i.e., each day the U unvaccinated individuals without symptoms that have the highest risk are vaccinated. If there are still some vaccines left after everyone in the ring has been vaccinated, which can happen when individuals are unreachable during the vaccination process or in the later stage of the outbreak, then the remaining vaccines are randomly distributed to the susceptible agents that are not in the identified clusters.

4 Preliminary results

We compare the risk-based ring vaccination approach to three baselines: random vaccination, full ring vaccination, and no-prioritization ring vaccination. All baselines vaccinate only individuals that have no symptoms and are unvaccinated (i.e., individuals in states S and IC). In random vaccination, U individuals are vaccinated at random each day. In no-prioritization ring, U individuals that are in a ring are vaccinated and any leftover vaccines are randomly distributed. In full ring, all individuals in a ring are vaccinated, relaxing the constraint of U vaccines per day. In all cases, each individual has a 30% chance to be unreachable (as in [8]). The dose that would go to that individual instead goes to the next eligible agent (i.e., the next highest risk in risk-based, or another agent in the ring in no-prioritization ring).
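The risk scores of Equations (3)-(5) and the daily top-U allocation rule described in Section 3 can be sketched as follows. This is a minimal sketch under our own naming, not the authors' code; `ring` is an assumed data layout mapping agent ids to (risk, eligibility):

```python
PR_BASE = 0.01962  # inferred baseline probability of transmission (Table 1)

def risk_contact(w_xy):
    """Equation (3): risk for a direct contact y of the seed case x."""
    return PR_BASE * w_xy

def risk_contact_of_contact(w_xy, w_yz, y_infected):
    """Equations (4)-(5): risk for z, a contact of y. Once y is known to
    be infected, only the single y -> z transmission step remains."""
    if y_infected:
        return PR_BASE * w_yz
    return PR_BASE * w_xy * PR_BASE * w_yz

def allocate_doses(ring, daily_doses):
    """Vaccinate the `daily_doses` eligible ring members with highest risk.
    Eligible means unvaccinated and without symptoms (states S or IC)."""
    eligible = [(risk, aid) for aid, (risk, ok) in ring.items() if ok]
    eligible.sort(reverse=True)  # highest estimated risk first
    return [aid for _, aid in eligible[:daily_doses]]
```

Note how Equation (4) multiplies two full transmission probabilities, so an uninfected intermediate contact dilutes z's risk by roughly two orders of magnitude relative to Equation (5).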
We simulate the ABM with 10 seed cases selected uniformly at random from the population.

By ranking the individuals who are most at risk, risk-based ring vaccination substantially reduces the number of infections and deaths (Fig. 1 and Tab. 2). However, the impact of risk-based prioritization varies significantly across dose limits. At all dose limits, we see a statistically significant difference between risk-based prioritization and standard ring vaccination. This difference is most salient for moderate dose limits: for 100 daily doses, risk-based reduces deaths by roughly 2 times that of randomized vaccination and 1.8 times that of no-prioritization ring. With 200 doses available, both risk-based and no-prioritization ring differ substantially from randomized vaccination, whereas at 50 and 100 doses, no-prioritization ring and random achieve relatively similar performance. In the case of 50 daily doses, risk-based ring has a smaller impact on the number of infections and deaths (< 9% relative to random). However, we see substantial shifting of the infection curve in this setting, delaying the peak by about 20 days.

The full ring strategy (without dose limit) results in few deaths, as the vaccine for EVD is highly effective even when administered after exposure and even when 30% of contacts are unreachable at the time of vaccination. However, the cost of this performance is the need for a surge of vaccination in the first month of 321 ± 179 doses per day. This approach achieves control early, resulting in an average of 111 ± 152 daily doses across the whole period.

5 Discussion and Future Work

Creating control policies during an outbreak is challenging due to resource constraints such as limited healthcare personnel and medical supplies. Using an ABM, we study the impact of ring vaccination strategies under a daily dose limit, considering EVD as the case study.
We find that, even with a vaccination-infection combination that is highly suited to ring vaccination, ring vaccination has limited impact on new infections relative to random vaccination until the number of doses available is sufficiently high. Moreover, the implementation of risk-based ring vaccination we consider requires only slightly more information (contact types), yet has an impact even at much lower numbers of delivered doses.

It is expected to observe phase transitions in vaccination programs due to the exponential dynamics involved in infections: when the number of daily vaccine doses passes a threshold, infections will decay exponentially, and the outbreak can be contained. However, this intuition does not apply directly to ring vaccination. Despite the ability of ring vaccination to identify individuals who have a higher risk of infection than the broader population, the impact on new infections is relatively modest. A small modification of standard ring vaccination, involving risk-based prioritization among documented contacts, induces dramatically different behavior. Specifically, for a small number of doses (Fig. 1), a risk-based approach yields a shift in the time at which the peak in new infections is reached, thus postponing a surge more efficiently than standard ring vaccination and randomized vaccination. Moreover, above a certain threshold, lying between 50 and 100 daily doses in our model, the benefits of the risk-based approach compound and the shift in the timing of the peak is coupled with a significant reduction in the maximum number of new infections. These two distinct effects and their potential coupling are not well understood and merit further study.

A key question is whether more sophisticated vaccination strategies such as ring vaccination are worth the additional overhead cost of reliably identifying and contact tracing cases.
The answer to this question is multi-faceted and will depend on the interplay among outbreak stage, vaccine availability, and the combination of vaccination and infection properties. More effort is needed to understand these interactions: during an infectious disease emergency, resources are scarce and need to be allocated towards the geographical areas or subpopulations that result in the highest impacts, i.e., the largest reduction in the maximum number of new infections and the greatest delay in the timing of the peak.

Our study has several limitations. Our current ABM does not incorporate realistic superspreading dynamics. Yet many infectious diseases demonstrate a high degree of transmission heterogeneity, i.e., relatively few seed cases cause many secondary infections [11]. While not well captured in our model, this aspect has substantial consequences for ring vaccination because the variance of the strategy's outcome is increased, i.e., a single missed secondary case can have a much larger impact on the timing of the peak in new infections and its magnitude than in the absence of transmission heterogeneity. We suspect that accounting for superspreading events would further reduce the benefits of ring vaccination. However, in some circumstances, pronounced superspreading can make risk-based targeting more effective, as observations from a given ring can be used to infer the transmission potential of the seed case.

Furthermore, it is already a hard task to gather contacts and contacts of contacts to form a ring for vaccination. Obtaining information regarding exposure types between infected individuals and their contacts is even more time- and resource-intensive. Although risk-based ring vaccination is more effective in our results, it is important to consider additional factors like timing and human resources in order to better evaluate the efficacy of our method.

By design, ring vaccination targets individuals with a higher number of contacts or more centrally located in a network. These individuals tend to get infected earlier than their counterparts with an average number of contacts and centrality [3]. Risk-based ring vaccination, by prioritizing individuals with contacts at higher risk, will additionally target individuals in larger households.

Figure 1. The daily mean count (± standard deviation) of infected under different vaccination strategies, for (a) 50 doses, (b) 100 doses, and (c) 200 doses. We simulate outbreaks with 10 seed cases for each policy given different numbers of available vaccines. The shaded region indicates the standard deviation for each vaccination strategy.

Table 2. Mean (95% CI) count of deceased for each strategy and dose limit.

Strategy | 50 doses | 100 doses | 200 doses
Risk-based ring | 8465.77 (8370.63–8560.91) | 3268.67 (1399.83–5137.50) | 175.77 (144.14–207.4)
No prioritization ring | 9184 (9101.12–9266.88) | 6091.50 (5915.62–6267.38) | 784.7 (663.08–906.32)
Random | 9272.33 (9164.44–9380.22) | 6488.57 (6425.06–6552.09) | 2044.4 (1627.39–2461.41)
Full ring (no dose limit) | 27.33 (10.79–43.87) | |
No vaccination | 12189.80 (12156.43–12223.17) | |
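The headline comparisons in Section 4 can be recomputed from the Table 2 means at the 100-dose limit. A small arithmetic check (the dictionary layout is ours):

```python
# Mean death counts at the 100 daily dose limit, from Table 2.
deaths_100 = {
    "risk_based": 3268.67,
    "no_prioritization": 6091.50,
    "random": 6488.57,
    "no_vaccination": 12189.80,
}

# Risk-based prioritization roughly halves deaths relative to random
# vaccination, and cuts them by a factor of about 1.8-1.9 relative to
# ring vaccination without prioritization.
ratio_vs_random = deaths_100["random"] / deaths_100["risk_based"]
ratio_vs_no_prior = deaths_100["no_prioritization"] / deaths_100["risk_based"]
```

These ratios come out near 2.0 and 1.86, consistent with the "roughly 2 times" and "1.8 times" figures quoted in the results.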
This additional feature operates independently from the "encirclement" aspect of standard ring vaccination; more work is needed to quantify their respective contributions (e.g., by comparing risk-based vaccination to strategies that prioritize individuals based on household size).

Acknowledgments

KS was supported in part by grant SES2200228 from the National Science Foundation. MSM was supported in part by grant R35GM146974 from the National Institute of General Medical Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

[1] Hilary Bower, Sembia Johnson, Mohamed S Bangura, Alie Joshua Kamara, Osman Kamara, Saidu H Mansaray, Daniel Sesay, Cecilia Turay, Francesco Checchi, and Judith R Glynn. 2016. Exposure-specific and age-specific attack rates for Ebola virus disease in Ebola-affected households, Sierra Leone. Emerging Infectious Diseases 22, 8 (2016), 1403.
[2] Ebola ça Suffit Ring Vaccination Trial Consortium. 2015. The ring vaccination trial: a novel cluster randomised controlled trial design to evaluate vaccine efficacy and effectiveness during outbreaks, with special reference to Ebola. BMJ: British Medical Journal 351 (2015), h3740.
[3] Nicholas A Christakis and James H Fowler. 2010. Social network sensors for early detection of contagious outbreaks. PLoS ONE 5, 9 (2010), e12948.
[4] Meredith G Dixon, Melanie M Taylor, Jacob Dee, Avi Hakim, Paul Cantey, Travis Lim, Hawa Bah, Sékou Mohamed Camara, Clement B Ndongmo, Mory Togba, et al. 2015. Contact tracing activities during the Ebola virus disease epidemic in Kindia and Faranah, Guinea, 2014. Emerging Infectious Diseases 21, 11 (2015), 2022.
[5] Peter J Dodd, Clare Looker, Ian D Plumb, Virginia Bond, Ab Schaap, Kwame Shanaube, Monde Muyoyeta, Emilia Vynnycky, Peter Godfrey-Faussett, Elizabeth L Corbett, et al. 2016. Age- and sex-specific social contact patterns and incidence of Mycobacterium tuberculosis infection. American Journal of Epidemiology 183, 2 (2016), 156–166.
[6] Ana Maria Henao-Restrepo, Anton Camacho, Ira M Longini, Conall H Watson, W John Edmunds, Matthias Egger, Miles W Carroll, Natalie E Dean, Ibrahima Diatta, Moussa Doumbia, et al. 2017. Efficacy and effectiveness of an rVSV-vectored vaccine in preventing Ebola virus disease: final results from the Guinea ring vaccination, open-label, cluster-randomised trial (Ebola Ça Suffit!). The Lancet 389, 10068 (2017), 505–518.
[7] Mirjam Kretzschmar, Susan Van den Hof, Jacco Wallinga, and Jan Van Wijngaarden. 2004. Ring vaccination and smallpox control. Emerging Infectious Diseases 10, 5 (2004), 832.
[8] Adam J Kucharski, Rosalind M Eggo, Conall H Watson, Anton Camacho, Sebastian Funk, and W John Edmunds. 2016. Effectiveness of ring vaccination as control strategy for Ebola virus disease. Emerging Infectious Diseases 22, 1 (2016), 105.
[9] Judith Legrand, Rebecca Freeman Grais, Pierre-Yves Boelle, Alain-Jacques Valleron, and Antoine Flahault. 2007. Understanding the dynamics of Ebola epidemics. Epidemiology & Infection 135, 4 (2007), 610–621.
[10] Yang Liu, Rosalind M Eggo, and Adam J Kucharski. 2020. Secondary attack rate and superspreading events for SARS-CoV-2. The Lancet 395, 10227 (2020), e47.
[11] James O Lloyd-Smith, Sebastian J Schreiber, P Ekkehard Kopp, and Wayne M Getz. 2005. Superspreading and the effect of individual variation on disease emergence. Nature 438, 7066 (2005), 355–359.
[12] SI Okware, FG Omaswa, S Zaramba, A Opio, JJ Lutwama, J Kamugisha, EB Rwaguma, P Kagwa, and M Lamunu. 2002. An outbreak of Ebola in Uganda. Tropical Medicine & International Health 7, 12 (2002), 1068–1075.
[13] Laura Ozella, Daniela Paolotti, Guilherme Lichand, Jorge P Rodríguez, Simon Haenni, John Phuka, Onicio B Leal-Neto, and Ciro Cattuto. 2021. Using wearable proximity sensors to characterize social contact patterns in a village of rural Malawi. EPJ Data Science 10, 1 (2021), 46.
[14] Enqiang Qin, Jingfeng Bi, Min Zhao, Ye Wang, Tongsheng Guo, Tao Yan, Zhiwei Li, Juan Sun, Jieli Zhang, Suhong Chen, et al. 2015. Clinical features of patients with Ebola virus disease in Sierra Leone. Clinical Infectious Diseases 61, 4 (2015), 491–495.
[15] Yingrui Yang, Ashley McKhann, Sixing Chen, Guy Harling, and Jukka-Pekka Onnela. 2019. Efficient vaccination strategies for epidemic control using network information. Epidemics 27 (2019), 115–122.
bnqE38fHCY3
Review of the paper on risk-based ring vaccination
5: Top 50% of accepted papers, clear accept
The paper is written and explained very well. The authors have employed agent-based simulation, incorporating six distinct states and separate sampling for household and non-household contacts. The authors have introduced risk based ring vaccination and showed that it is more effective compared to the random allocation and ring allocation. Furthermore, the authors have provided insightful suggestions for potential future research directions, all of which are highly intriguing and would greatly enhance the existing work. The assumptions regarding within-household contact appear logical, while estimates for non-household contact draw from a social contact pattern study conducted in Malawi. It's important to talk about why these assumptions and estimates are important and how they affect the proposed vaccine strategy. It would be interesting to discuss the C.I. patterns shown in Figure 1. Specifically, we could look at whether the variability decreases for certain vaccine allocation strategies after a certain number of days. Notably, in Figure 1(b), why does the curve based on ring vaccination exhibit such a narrow range between 80 and 100 (around 90-95)?
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
N0qlvDjnEv
KDD.org/2023/Workshop/epiDAMIK
2023
Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation
["Dinh Song An Nguyen", "Marie-Laure Charpignon", "Kathryn L Schaber", "Maimuna S. Majumder", "Andrew Perrault"]
Throughout an infectious disease crisis, resources that can be used to slow and prevent spread are often scarce or expensive. Designing control policies to optimally allocate these resources to maximize objectives is challenging. Here, we study the case of ring vaccination, a strategy that is used to control the spread of infection by vaccinating the contacts of identified infected individuals and their contacts of contacts. Using agent-based modeling to simulate an Ebola outbreak, we introduce a risk-based ring vaccination strategy in which individuals in a ring are prioritized based on their relative infection risks. Assuming the risk of transmission by contact type is known and a fixed supply of vaccine doses is available on each day, we compared this strategy to ring vaccination without prioritization and randomized vaccination. We find that risk-based ring vaccination offers a substantial advantage over standard ring vaccination when the number of doses are limited, including reducing the daily infected count and death count, and shifting the pandemic peak by a considerable amount of time. We believe that control policies based on estimated risk can often offer significant benefits without increasing the burden of administering the policy by an unacceptable amount.
["agent-based modeling", "ring vaccination", "Ebola", "public health"]
Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation

Dinh Song An Nguyen, The Ohio State University, Columbus, Ohio, USA, [email protected]
Marie Charpignon, MIT, Cambridge, Massachusetts, USA, [email protected]
Kathryn L Schaber, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA, [email protected]
Maimuna Shahnaz Majumder*, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA, [email protected]
Andrew Perrault*, The Ohio State University, Columbus, Ohio, USA, [email protected]

Abstract. Throughout an infectious disease crisis, resources that can be used to slow and prevent spread are often scarce or expensive. Designing control policies to optimally allocate these resources to maximize objectives is challenging. Here, we study the case of ring vaccination, a strategy that is used to control the spread of infection by vaccinating the contacts of identified infected individuals and their contacts of contacts. Using agent-based modeling to simulate an Ebola outbreak, we introduce a risk-based ring vaccination strategy in which individuals in a ring are prioritized based on their relative infection risks. Assuming the risk of transmission by contact type is known and a fixed supply of vaccine doses is available on each day, we compared this strategy to ring vaccination without prioritization and randomized vaccination. We find that risk-based ring vaccination offers a substantial advantage over standard ring vaccination when the number of doses is limited, including reducing the daily infected count and death count, and shifting the pandemic peak by a considerable amount of time.
We believe that control policies based on estimated risk can often offer significant benefits without increasing the burden of administering the policy by an unacceptable amount.

Keywords: agent-based modeling, ring vaccination, Ebola, public health

* These authors co-supervised this research.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. © 2023 Copyright held by the owner/author(s).

ACM Reference Format: Dinh Song An Nguyen, Marie Charpignon, Kathryn L Schaber, Maimuna Shahnaz Majumder, and Andrew Perrault. 2023. Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 6 pages.

1 Introduction

Designing control policies for infectious disease outbreaks can be challenging for several reasons, including scientific uncertainty surrounding newly emerging diseases, many objectives that can be in tension with each other, and limited access to labor and other critical resources. In this paper, we consider the case of ring vaccination, a vaccination delivery strategy that is employed when the supply of vaccines and the labor required to administer them is limited. Ring vaccination vaccinates individuals within a ring: the contacts and contacts of contacts of an infected case. Given a vaccine with appropriate properties, especially the ability to safely inoculate an individual who has been recently exposed, ring vaccination can be highly effective.
It has been used as a keytool in several Ebola and smallpox outbreaks [2, 6, 7].Ring vaccination functions by targeting individuals whowould be at a higher level of risk of developing the infec-tion, relative to the general population. For example, in the(early/late) stages of Ebola outbreak of Gulu district, Ugandain 2000, the attack rate across the population was roughly0.126% [12]. However, the secondary attack rate (SAR), de-fined as the probability that an infection occurs among sus-ceptible people within a specific set of contacts, can betterreflect the relation between social interactions and transmis-sion risk [ 10]. Yang et al . [15] estimate its value at 2.5%—thus,a vaccine administered immediately after exposure wouldbe about 20 times more effective compared to a randomlydelivered vaccination.epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Dinh Song An Nguyen, Marie Charpignon, Kathryn Schaber, Maimuna Majumder, Andrew PerraultHowever, not all individuals in a ring have the same in-fection risk. For instance, contacts of contacts are less likely,on average, to become infected because transmission mustoccur twice. Many observable and unobservable factors maycontribute to this risk, including the type and duration ofcontact between individuals, biological differences that makesome people more effective transmitters, multiple exposurepaths, and behavioral differences that are caused by the pres-ence or absence of public health monitoring (i.e., immediateself isolation at symptom onset).Like other control policies that target individuals withelevated risk such as contact tracing, ring vaccination facesa fundamental challenge that the number of such individu-als is roughly linear in the number of infected individuals,which varies by orders of magnitude throughout a crisis,but the amount of supplies and labor available per day isroughly fixed. 
We argue that control policies can leverageestimated risk to prioritize vaccine dose allocation, yieldingbetter performance when supplies are scarce. To that end, wepropose a risk-based ring vaccination strategy that leveragesthe differing risks associated with different contact types,information that can be easily elicited as part of contacttracing.We evaluate the risk-based ring strategy in an agent-basedmodel (ABM) and consider Ebola as the case study becauseof its unique transmission intensity bases on type of contact.We show that, when doses are highly restricted, risk-basedring vaccination yields significant benefits over standardring vaccination and randomized vaccination by not onlyreducing overall transmissions and deaths but also shiftingthe pandemic peak. We find that the extra risk associatedwith ring membership is quickly diluted as there are manymore contacts of contacts than contacts, and most contactshave little transmission chance associated with them.2 Agent-based modelWe develop an ABM for Ebola Virus Disease (EVD) withN=14652 agents (Table 1). We model two agent characteris-tics that influence spread and mortality: age and householdmembership. We replicate the household structure and agedistributions from Dodd et al . [5], who collected data in Zam-bia and South Africa in 2005-2006, and again in Zambia in2011. Each agent is in one of the six following discrete stateson each day: Susceptible (S), Incubating(IC), Infectious(I),Vaccinated but not yet immune (V), Deceased(D), and Re-moved (immune or recovered) (R). StateScomprises agentswho have not yet received a vaccine or become immune.StateIcomprises agents who are capable of transmittingEVD to their contacts who are currently in S. At the endof their infectious period, agents in state Itransition intostateDor stateR, depending on Pr(D|age). 
We estimate theage-specific probability of death using previously reportedcase fatality rates (CFR) of EVD for different age groups [ 14].Contacts are sampled daily. We sample household andnon-household contacts separately. We assume that contactsbetween each pair of individuals within a household occursevery day. Non-household contacts are sampled from thepopulation according to the inter-household contact matrixfrom Ozella et al . [13] , collected in a village in rural Malawi,accounting for the age of the person. We assume that thenumber of contacts follows an independent Poisson distri-bution for each age-age contact pair.Each contact has an associated exposure type. For house-hold contacts, we use and sample the exposure types andtheir distributions observed by Bower et al . [1], which in-clude handling fluids, direct and indirect wet and dry con-tacts, and minimal to no contact. Direct contact refers tosituation in which individuals come into direct contact, suchas touching and caring for a patient diagnosed with EVD,whereas an indirect contact refers to situations such as wash-ing clothes or sharing the same bed with an EVD positivepatient. In addition, wet contact refers to contact with anEVD patient that is symptomatic (e.g. vomiting, bleeding,etc.) while dry contact refers to contact with patients with-out any symptoms. Each type of contact associates with adifferent risk level. For example, a direct contact with fluidsis associated with a higher risk of transmission than a dry,physical contact. We let Wx,y,t represent the risk ratio ofthe contact between agents xandy. For household contacts,it is the age-adjusted risk ratio from Bower et al . [1]. Fornon-household contacts, we assign the same type to each,with a risk ratio we set to match with the non-householdSAR reported in Dixon et al . 
[4] (see Inferred parameters). W_{x,y,t} = 0 if no contact occurred.
We define the probability of transmission from agent x to agent y on day t as

Pr(base) · W_{x,y,t},

where Pr(base) is an inferred baseline probability of infection. The process for inferring this parameter is described in the next section.
Vaccination. The 2017 Guinea ring vaccination trial demonstrates that the vaccine we considered in our simulations (rVSV-ZEBOV) is safe to administer to individuals who are incubating, but do not yet show symptoms [6]. Moreover, rVSV-ZEBOV has 100% effectiveness if administered after exposure. Therefore, we assume that agents in states IC and S are eligible for vaccination. After vaccination, they transition to state V, and nine days later, they transition to state R, where agents are considered immune.
Inferred parameters. We need to infer the parameters Pr(base) and RR(non-household), the non-household risk ratio, from data. Pr(base) can be interpreted as the probability of transmission for a household contact of the minimal contact type. We set this value in order to match the secondary attack rate (SAR) of the ABM to the SAR that was previously reported for Ebola. Specifically, we solve the following equation for Pr(base):

SAR_hh = Pr(base) · Σ_i Pr(i | household contact) · RR(i),   (1)

where Pr(i) is the probability of a contact having type i and RR(i) is the risk ratio associated with contact type i. This results in Pr(base) = 0.01962. With Pr(base) identified, we can solve for RR(non-household):

SAR_non-hh = Pr(base) · RR(non-household),   (2)

resulting in RR(non-household) = 2.45, an intensity between indirect wet and indirect dry contact.

Table 1. Parameters for the ABM.
Ebola dynamics:
- Incubation period: Lognormal, μ = 2.446 days, σ = 0.284 (Legrand et al. [9])
- Infectious period: Lognormal, μ = 2.2915 days, σ = 0.1332 (Legrand et al. [9])
- Case fatality rate: ages < 15: 77.8%; ages 15-59: 85.87%; ages > 59: 95.7% (Qin et al. [14])
- Time from vaccination to immunity: 9 days (Kucharski et al. [8])
- Household secondary attack rate: 12.3% (Dixon et al. [4])
- Non-household secondary attack rate: 4.8% (Dixon et al. [4])
- Non-household contact matrix (Ozella et al. [13]): Adults-Children: Poisson, λ = 1.2; Adults-Adolescents: Poisson, λ = 1.5; Adults-Adults: Poisson, λ = 5.3; Adolescents-Children: Poisson, λ = 2.0; Adolescents-Adolescents: Poisson, λ = 3.6; Children-Children: Poisson, λ = 0.2
Inferred model parameters:
- Base probability of transmission: 0.01962 (inferred from Bower et al. [1])
- Contact type distribution (household) and risk ratios (RR) (Bower et al. [1]): handled fluids: 16.3%, RR 9.7; direct wet contact: 40.3%, RR 8.3; direct dry contact: 17%, RR 5.6; indirect wet contact: 2.6%, RR 4.9; indirect dry contact: 10%, RR 1.3; minimal contact: 13.8%, RR 1
- Risk ratio for non-household: 2.45 (inferred from Equation 2)

3 Risk-based ring vaccination
In the risk-based ring vaccination strategy, we prioritize the limited vaccine doses to agents within a ring with the highest estimated risks. The estimation strategy for risks needs to be simple and only use information that is easy to observe. Specifically, we propose estimating risks based on contact type and household membership and doing so only within a ring—thus, there are at most two contact events that contribute to any estimated risk. We assume that risks are estimated separately for each ring and that there is no coordination between rings. Risks are updated for each individual at most once—we update them for contacts of contacts if the contact becomes infected.
We define a ring as the contacts and contacts of contacts of the infected agent. Let x denote the seed case for the ring, y denote a contact of x, and z denote a contact of y.
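Before continuing, the parameter inference of Section 2 (Equations 1 and 2) can be reproduced numerically from the Table 1 values. A minimal sketch in Python (illustrative, not the authors' code):

```python
# Sketch of the two-step inference in Equations 1-2, using the household
# contact-type distribution and risk ratios from Table 1 (Bower et al. [1])
# and the SARs from Dixon et al. [4].
SAR_HH = 0.123        # household secondary attack rate
SAR_NON_HH = 0.048    # non-household secondary attack rate

# contact type -> (Pr(type | household contact), RR(type))
HOUSEHOLD_CONTACT_TYPES = {
    "handled fluids":       (0.163, 9.7),
    "direct wet contact":   (0.403, 8.3),
    "direct dry contact":   (0.170, 5.6),
    "indirect wet contact": (0.026, 4.9),
    "indirect dry contact": (0.100, 1.3),
    "minimal contact":      (0.138, 1.0),
}

# Equation 1: SAR_hh = Pr(base) * sum_i Pr(i) * RR(i)  =>  solve for Pr(base)
pr_base = SAR_HH / sum(p * rr for p, rr in HOUSEHOLD_CONTACT_TYPES.values())

# Equation 2: SAR_non-hh = Pr(base) * RR(non-household)
rr_non_household = SAR_NON_HH / pr_base

print(round(pr_base, 5), round(rr_non_household, 2))  # ≈ 0.0196, 2.45
```

The recovered values match the paper's reported Pr(base) = 0.01962 (up to rounding) and RR(non-household) = 2.45.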
We define the risk for y as

R(y) = Pr(base) · W_{x,y},   (3)

where W_{x,y} is the risk ratio associated with the highest-intensity contact between x and y after x developed symptoms, i.e., max_t W_{x,y,t} with t in x's infectious period. For z, we define the risk as

R(z | y is not infected) = Pr(base) · W_{x,y} · Pr(base) · W_{y,z},   (4)
R(z | y is infected) = Pr(base) · W_{y,z},   (5)

using Equation 4 if y is not known to be infected and updating to use Equation 5 if y becomes infected.
Individuals in the ring are then vaccinated in order of their risk ranking, i.e., each day the U unvaccinated individuals without symptoms who have the highest risk are vaccinated. If there are still some vaccines left after everyone in the ring has been vaccinated, which can happen when individuals are unreachable during the vaccination process or in the later stage of the outbreak, then the remaining vaccines are randomly distributed to the susceptible agents that are not in the identified clusters.

4 Preliminary results
We compare the risk-based ring vaccination approach to three baselines: random vaccination, full ring vaccination, and no-prioritization ring vaccination. All baselines vaccinate only individuals that have no symptoms and are unvaccinated (i.e., individuals in states S and IC). In random vaccination, U individuals are vaccinated at random each day. In no-prioritization ring, U individuals that are in a ring are vaccinated and any leftover vaccines are randomly distributed. In full ring, all individuals in a ring are vaccinated, relaxing the constraint of U vaccines per day. In all cases, each individual has a 30% chance of being unreachable (as in [8]). The dose that would go to that individual instead goes to the next eligible agent (i.e., the next highest risk in risk-based, or another agent in the ring in no-prioritization ring).
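Returning to Section 3, the per-ring risk ranking of Equations 3-5 can be sketched as follows (an illustrative implementation, not the authors' code; keeping the highest-risk path when z has multiple exposure paths is an assumption):

```python
# Sketch of risk-based ranking inside one ring (Equations 3-5): contacts y
# of seed case x get risk Pr(base)*W[x,y]; contacts of contacts z get the
# product of both link risks until y is known to be infected.
PR_BASE = 0.01962  # inferred baseline transmission probability

def ring_risks(w_seed_contact, w_contact_cc, infected_contacts=frozenset()):
    """w_seed_contact: {y: max risk ratio of the x-y contact}
    w_contact_cc: {(y, z): max risk ratio of the y-z contact}
    Returns {agent: estimated risk}."""
    risks = {y: PR_BASE * w for y, w in w_seed_contact.items()}
    for (y, z), w_yz in w_contact_cc.items():
        if y in infected_contacts:                       # Equation 5
            r = PR_BASE * w_yz
        else:                                            # Equation 4
            r = PR_BASE * w_seed_contact[y] * PR_BASE * w_yz
        # assumption: keep the highest-risk exposure path for z
        risks[z] = max(risks.get(z, 0.0), r)
    return risks

# Each day, the U symptom-free, unvaccinated members with highest risk
# would be vaccinated first; here we just compute the ranking.
risks = ring_risks({"y1": 8.3, "y2": 1.0},
                   {("y1", "z1"): 5.6, ("y2", "z2"): 9.7})
order = sorted(risks, key=risks.get, reverse=True)  # y1, y2, z1, z2
```

Note how a direct contact y dominates even a high-intensity contact of contact z, since z's risk carries two factors of Pr(base).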
We simulate the ABM with 10 seed cases selected uniformly at random from the population.
By vaccinating the individuals who are most at risk first, risk-based ring vaccination substantially reduces the number of infections and deaths (Fig. 1 and Tab. 2). However, the impact of risk-based prioritization varies significantly across dose limits. At all dose limits, we see a statistically significant difference between risk-based prioritization and standard ring vaccination. This difference is most salient for moderate dose limits—for 100 daily doses, risk-based reduces deaths by roughly 2 times that of randomized vaccination and 1.8 times that of no-prioritization ring. With 200 doses available, both risk-based and no-prioritization ring differ substantially from randomized vaccination, whereas with 50 and 100 doses, no-prioritization ring and random achieve relatively similar performance. In the case of 50 daily doses, risk-based ring has a smaller impact on the number of infections and deaths (<9% relative to random). However, we see a substantial shift of the infection curve in this setting, delaying the peak by about 20 days.
The full ring strategy (without dose limit) results in few deaths, as the vaccine for EVD is highly effective even when administered after exposure and even when 30% of contacts are unreachable at the time of vaccination. However, the cost of this performance is the need for a surge of vaccination in the first month of 321 ± 179 doses per day. This approach achieves control early, resulting in an average of 111 ± 152 daily doses across the whole period.

5 Discussion and Future Work
Creating control policies during an outbreak is challenging due to resource constraints such as limited healthcare personnel and medical supplies. Using an ABM, we study the impact of ring vaccination strategies under a daily dose limit, and consider EVD as the case study.
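The "roughly 2 times" and "1.8 times" figures quoted above can be checked directly against the 100-dose column of Table 2 (mean deceased counts):

```python
# Death-count ratios at 100 daily doses, from Table 2 of the paper.
deaths_100 = {
    "risk_based": 3268.67,
    "no_prioritization_ring": 6091.50,
    "random": 6488.57,
}

vs_random = deaths_100["random"] / deaths_100["risk_based"]                  # ~1.99
vs_no_prior = deaths_100["no_prioritization_ring"] / deaths_100["risk_based"]  # ~1.86
print(f"{vs_random:.2f}x vs random, {vs_no_prior:.2f}x vs no-prioritization ring")
```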
We find that, even with a vaccination-infection combination that is highly suited to ring vaccination, ring vaccination has limited impact on new infections relative to random vaccination until the number of doses available is sufficiently high. Moreover, the implementation of risk-based ring vaccination we consider only requires slightly more information (contact types), but has an impact even at much lower numbers of delivered doses.
It is expected to observe phase transitions in vaccination programs due to the exponential dynamics involved in infections: when the number of daily vaccine doses passes a threshold, infections will decay exponentially, and the outbreak can be contained. However, this intuition does not apply directly to ring vaccination. Despite the ability of ring vaccination to identify individuals who have a higher risk of infection than the broader population, the impact on new infections is relatively modest. A small modification of standard ring vaccination—involving risk-based prioritization among documented contacts—induces dramatically different behavior. Specifically, for a small number of doses (Fig. 1), a risk-based approach yields a shift in the time at which the peak in new infections is reached, thus postponing a surge more efficiently than standard ring vaccination and randomized vaccination. Moreover, above a certain threshold, lying between 50 and 100 daily doses in our model, the benefits of the risk-based approach compound and the shift in the timing of the peak is coupled with a significant reduction in the maximum number of new infections. These two distinct effects and their potential coupling are not well understood and merit further study.
A key question is whether more sophisticated vaccination strategies such as ring vaccination are worth the additional overhead cost of reliably identifying and contact tracing cases.
The answer to this question is multi-faceted and will depend on the interplay among outbreak stage, vaccine availability, and the combination of vaccination and infection properties. More effort is needed to understand these interactions: during an infectious disease emergency, resources are scarce and need to be allocated towards the geographical areas or subpopulations that result in the highest impacts, i.e., the largest reduction in the maximum number of new infections and the greatest delay in the timing of the peak.
Our study has several limitations. Our current ABM does not incorporate realistic superspreading dynamics. Yet many infectious diseases demonstrate a high degree of transmission heterogeneity, i.e., relatively few seed cases cause many secondary infections [11]. While not well captured in our model, this aspect has substantial consequences for ring vaccination because the variance of the strategy's outcome is increased, i.e., a single missed secondary case can have a much larger impact on the timing of the peak in new infections and its magnitude than in the absence of transmission heterogeneity. We suspect that accounting for superspreading events would further reduce the benefits of ring vaccination. However, in some circumstances, pronounced superspreading can make risk-based targeting more effective, as observations from a given ring can be used to infer the transmission potential of the seed case.
Furthermore, it is already a hard task to gather contacts and contacts of contacts to form a ring for vaccination. Obtaining information regarding exposure types between infected individuals and their contacts is even more time and resource intensive. Although risk-based ring vaccination is more effective in our results, it is important to consider additional factors like timing and human resources in order to better evaluate the efficacy of our method.
By design, ring vaccination targets individuals with a higher number of contacts or who are more centrally located in a network. These individuals tend to get infected earlier than their counterparts with an average number of contacts and centrality [3]. Risk-based ring vaccination, by prioritizing individuals with contacts at higher risk, will additionally target individuals in larger households.

Figure 1. The daily mean count (± standard deviation) of infected under different vaccination strategies: (a) 50 doses, (b) 100 doses, (c) 200 doses. We simulate outbreaks with 10 seed cases for each policy given different numbers of vaccine availability. The shaded region indicates the standard deviation for each vaccination strategy.

Table 2. Mean (95% CI) count of deceased for each strategy and dose limit.
- Risk-based ring: 50 doses: 8465.77 (8370.63–8560.91); 100 doses: 3268.67 (1399.83–5137.50); 200 doses: 175.77 (144.14–207.4)
- No-prioritization ring: 50 doses: 9184 (9101.12–9266.88); 100 doses: 6091.50 (5915.62–6267.38); 200 doses: 784.7 (663.08–906.32)
- Random: 50 doses: 9272.33 (9164.44.35–9380.22); 100 doses: 6488.57 (6425.06–6552.09); 200 doses: 2044.4 (1627.39–2461.41)
- Full ring (no dose limit): 27.33 (10.79–43.87)
- No vaccination: 12189.80 (12156.43–12223.17)
This additional feature operates independently from the "encirclement" aspect of standard ring vaccination; more work is needed to quantify their respective contributions (e.g., by comparing risk-based vaccination to strategies that prioritize individuals based on household size).

Acknowledgments
KS was supported in part by grant SES2200228 from the National Science Foundation. MSM was supported in part by grant R35GM146974 from the National Institute of General Medical Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References
[1] Hilary Bower, Sembia Johnson, Mohamed S Bangura, Alie Joshua Kamara, Osman Kamara, Saidu H Mansaray, Daniel Sesay, Cecilia Turay, Francesco Checchi, and Judith R Glynn. 2016. Exposure-specific and age-specific attack rates for Ebola virus disease in Ebola-affected households, Sierra Leone. Emerging Infectious Diseases 22, 8 (2016), 1403.
[2] Ebola ça Suffit Ring Vaccination Trial Consortium. 2015. The ring vaccination trial: a novel cluster randomised controlled trial design to evaluate vaccine efficacy and effectiveness during outbreaks, with special reference to Ebola. BMJ: British Medical Journal 351 (2015), h3740.
[3] Nicholas A Christakis and James H Fowler. 2010. Social network sensors for early detection of contagious outbreaks. PLoS ONE 5, 9 (2010), e12948.
[4] Meredith G Dixon, Melanie M Taylor, Jacob Dee, Avi Hakim, Paul Cantey, Travis Lim, Hawa Bah, Sékou Mohamed Camara, Clement B Ndongmo, Mory Togba, et al. 2015.
Contact tracing activities during the Ebola virus disease epidemic in Kindia and Faranah, Guinea, 2014. Emerging Infectious Diseases 21, 11 (2015), 2022.
[5] Peter J Dodd, Clare Looker, Ian D Plumb, Virginia Bond, Ab Schaap, Kwame Shanaube, Monde Muyoyeta, Emilia Vynnycky, Peter Godfrey-Faussett, Elizabeth L Corbett, et al. 2016. Age- and sex-specific social contact patterns and incidence of Mycobacterium tuberculosis infection. American Journal of Epidemiology 183, 2 (2016), 156–166.
[6] Ana Maria Henao-Restrepo, Anton Camacho, Ira M Longini, Conall H Watson, W John Edmunds, Matthias Egger, Miles W Carroll, Natalie E Dean, Ibrahima Diatta, Moussa Doumbia, et al. 2017. Efficacy and effectiveness of an rVSV-vectored vaccine in preventing Ebola virus disease: final results from the Guinea ring vaccination, open-label, cluster-randomised trial (Ebola Ça Suffit!). The Lancet 389, 10068 (2017), 505–518.
[7] Mirjam Kretzschmar, Susan Van den Hof, Jacco Wallinga, and Jan Van Wijngaarden. 2004. Ring vaccination and smallpox control. Emerging Infectious Diseases 10, 5 (2004), 832.
[8] Adam J Kucharski, Rosalind M Eggo, Conall H Watson, Anton Camacho, Sebastian Funk, and W John Edmunds. 2016. Effectiveness of ring vaccination as control strategy for Ebola virus disease. Emerging Infectious Diseases 22, 1 (2016), 105.
[9] Judith Legrand, Rebecca Freeman Grais, Pierre-Yves Boelle, Alain-Jacques Valleron, and Antoine Flahault. 2007. Understanding the dynamics of Ebola epidemics. Epidemiology & Infection 135, 4 (2007), 610–621.
[10] Yang Liu, Rosalind M Eggo, and Adam J Kucharski. 2020. Secondary attack rate and superspreading events for SARS-CoV-2. The Lancet 395, 10227 (2020), e47.
[11] James O Lloyd-Smith, Sebastian J Schreiber, P Ekkehard Kopp, and Wayne M Getz. 2005. Superspreading and the effect of individual variation on disease emergence. Nature 438, 7066 (2005), 355–359.
[12] SI Okware, FG Omaswa, S Zaramba, A Opio, JJ Lutwama, J Kamugisha, EB Rwaguma, P Kagwa, and M Lamunu. 2002.
An outbreak of Ebola in Uganda. Tropical Medicine & International Health 7, 12 (2002), 1068–1075.
[13] Laura Ozella, Daniela Paolotti, Guilherme Lichand, Jorge P Rodríguez, Simon Haenni, John Phuka, Onicio B Leal-Neto, and Ciro Cattuto. 2021. Using wearable proximity sensors to characterize social contact patterns in a village of rural Malawi. EPJ Data Science 10, 1 (2021), 46.
[14] Enqiang Qin, Jingfeng Bi, Min Zhao, Ye Wang, Tongsheng Guo, Tao Yan, Zhiwei Li, Juan Sun, Jieli Zhang, Suhong Chen, et al. 2015. Clinical features of patients with Ebola virus disease in Sierra Leone. Clinical Infectious Diseases 61, 4 (2015), 491–495.
[15] Yingrui Yang, Ashley McKhann, Sixing Chen, Guy Harling, and Jukka-Pekka Onnela. 2019. Efficient vaccination strategies for epidemic control using network information. Epidemics 27 (2019), 115–122.
Ivf4OF0X2I
This paper proposed a risk-based ring vaccination method that achieves better performance than the existing no-prioritization ring method and random method.
4: Good paper, accept
This paper proposed a risk-based ring vaccination method that achieves better performance than the existing no-prioritization ring method and random method. Strengths: 1. Good motivation: the idea of risk-based vaccination allows more effective vaccine distribution. 2. The experiments showcase the effectiveness of the proposed risk-based ring vaccination method. Weaknesses: 1. Only one experimental setup is used for evaluation. Another experiment on other diseases, or at least one other model, would better showcase the proposed method. 2. The vaccine budgets (50/100/200) seem somewhat arbitrary. A budget based on real-world Ebola vaccine production rates would better showcase the effectiveness of the proposed method in application.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
N0qlvDjnEv
KDD.org/2023/Workshop/epiDAMIK
2023
Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation
["Dinh Song An Nguyen", "Marie-Laure Charpignon", "Kathryn L Schaber", "Maimuna S. Majumder", "Andrew Perrault"]
Throughout an infectious disease crisis, resources that can be used to slow and prevent spread are often scarce or expensive. Designing control policies to optimally allocate these resources to maximize objectives is challenging. Here, we study the case of ring vaccination, a strategy that is used to control the spread of infection by vaccinating the contacts of identified infected individuals and their contacts of contacts. Using agent-based modeling to simulate an Ebola outbreak, we introduce a risk-based ring vaccination strategy in which individuals in a ring are prioritized based on their relative infection risks. Assuming the risk of transmission by contact type is known and a fixed supply of vaccine doses is available on each day, we compared this strategy to ring vaccination without prioritization and randomized vaccination. We find that risk-based ring vaccination offers a substantial advantage over standard ring vaccination when the number of doses are limited, including reducing the daily infected count and death count, and shifting the pandemic peak by a considerable amount of time. We believe that control policies based on estimated risk can often offer significant benefits without increasing the burden of administering the policy by an unacceptable amount.
["agent-based modeling", "ring vaccination", "Ebola", "public health"]
Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation
Dinh Song An Nguyen, The Ohio State University, Columbus, Ohio, USA ([email protected])
Marie Charpignon, MIT, Cambridge, Massachusetts, USA ([email protected])
Kathryn L Schaber, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA ([email protected])
Maimuna Shahnaz Majumder∗, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA ([email protected])
Andrew Perrault∗, The Ohio State University, Columbus, Ohio, USA ([email protected])

Abstract: Throughout an infectious disease crisis, resources that can be used to slow and prevent spread are often scarce or expensive. Designing control policies to optimally allocate these resources to maximize objectives is challenging. Here, we study the case of ring vaccination, a strategy that is used to control the spread of infection by vaccinating the contacts of identified infected individuals and their contacts of contacts. Using agent-based modeling to simulate an Ebola outbreak, we introduce a risk-based ring vaccination strategy in which individuals in a ring are prioritized based on their relative infection risks. Assuming the risk of transmission by contact type is known and a fixed supply of vaccine doses is available on each day, we compared this strategy to ring vaccination without prioritization and randomized vaccination. We find that risk-based ring vaccination offers a substantial advantage over standard ring vaccination when the number of doses is limited, including reducing the daily infected count and death count, and shifting the pandemic peak by a considerable amount of time.
We believe that control policies based on estimated risk can often offer significant benefits without increasing the burden of administering the policy by an unacceptable amount.
Keywords: agent-based modeling, ring vaccination, Ebola, public health
∗These authors co-supervised this research.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. © 2023 Copyright held by the owner/author(s).
ACM Reference Format: Dinh Song An Nguyen, Marie Charpignon, Kathryn L Schaber, Maimuna Shahnaz Majumder, and Andrew Perrault. 2023. Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 6 pages.

1 Introduction
Designing control policies for infectious disease outbreaks can be challenging for several reasons, including scientific uncertainty surrounding newly emerging diseases, many objectives that can be in tension with each other, and limited access to labor and other critical resources. In this paper, we consider the case of ring vaccination, a vaccination delivery strategy that is employed when the supply of vaccines and the labor required to administer them is limited. Ring vaccination vaccinates individuals within a ring, contacts and contacts of contacts of an infected case. Given a vaccine with appropriate properties, especially the ability to safely inoculate an individual who has been recently exposed, ring vaccination can be highly effective.
It has been used as a key tool in several Ebola and smallpox outbreaks [2, 6, 7].
Ring vaccination functions by targeting individuals who would be at a higher level of risk of developing the infection, relative to the general population. For example, in the (early/late) stages of the Ebola outbreak of Gulu district, Uganda in 2000, the attack rate across the population was roughly 0.126% [12]. However, the secondary attack rate (SAR), defined as the probability that an infection occurs among susceptible people within a specific set of contacts, can better reflect the relation between social interactions and transmission risk [10]. Yang et al. [15] estimate its value at 2.5%—thus, a vaccine administered immediately after exposure would be about 20 times more effective compared to a randomly delivered vaccination.
However, not all individuals in a ring have the same infection risk. For instance, contacts of contacts are less likely, on average, to become infected because transmission must occur twice. Many observable and unobservable factors may contribute to this risk, including the type and duration of contact between individuals, biological differences that make some people more effective transmitters, multiple exposure paths, and behavioral differences that are caused by the presence or absence of public health monitoring (i.e., immediate self-isolation at symptom onset).
Like other control policies that target individuals with elevated risk, such as contact tracing, ring vaccination faces a fundamental challenge: the number of such individuals is roughly linear in the number of infected individuals, which varies by orders of magnitude throughout a crisis, but the amount of supplies and labor available per day is roughly fixed.
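The "about 20 times" figure above follows directly from the two quoted rates; a quick arithmetic check:

```python
# Ratio of the ring-member secondary attack rate (Yang et al. [15]) to the
# overall attack rate in the Gulu outbreak [12].
sar_ring = 0.025           # 2.5% secondary attack rate within rings
attack_rate_pop = 0.00126  # 0.126% population attack rate, Gulu, 2000
relative_risk = sar_ring / attack_rate_pop
print(round(relative_risk, 1))  # ≈ 19.8, i.e. "about 20 times"
```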
We argue that control policies can leverage estimated risk to prioritize vaccine dose allocation, yielding better performance when supplies are scarce. To that end, we propose a risk-based ring vaccination strategy that leverages the differing risks associated with different contact types, information that can be easily elicited as part of contact tracing.
We evaluate the risk-based ring strategy in an agent-based model (ABM) and consider Ebola as the case study because of its unique transmission intensity based on type of contact. We show that, when doses are highly restricted, risk-based ring vaccination yields significant benefits over standard ring vaccination and randomized vaccination by not only reducing overall transmissions and deaths but also shifting the pandemic peak. We find that the extra risk associated with ring membership is quickly diluted, as there are many more contacts of contacts than contacts, and most contacts have little transmission chance associated with them.

2 Agent-based model
We develop an ABM for Ebola Virus Disease (EVD) with N = 14652 agents (Table 1). We model two agent characteristics that influence spread and mortality: age and household membership. We replicate the household structure and age distributions from Dodd et al. [5], who collected data in Zambia and South Africa in 2005-2006, and again in Zambia in 2011. Each agent is in one of the six following discrete states on each day: Susceptible (S), Incubating (IC), Infectious (I), Vaccinated but not yet immune (V), Deceased (D), and Removed (immune or recovered) (R). State S comprises agents who have not yet received a vaccine or become immune. State I comprises agents who are capable of transmitting EVD to their contacts who are currently in S. At the end of their infectious period, agents in state I transition into state D or state R, depending on Pr(D | age).
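The end-of-infectious-period transition just described can be sketched as follows (an assumed helper built from the Table 1 CFRs of Qin et al. [14], not the authors' code; treating the 15-59 band as inclusive of 59 is an assumption):

```python
# Sketch of the I -> D / I -> R transition: an agent leaving the Infectious
# state dies with the age-specific case fatality rate, else is Removed.
import random

def cfr(age):
    """Age-specific case fatality rate from Table 1 (Qin et al. [14])."""
    if age < 15:
        return 0.778
    if age <= 59:   # assumption: 15-59 band inclusive of 59
        return 0.8587
    return 0.957

def resolve_infection(age, rng=random):
    """Return 'D' (deceased) or 'R' (removed/immune) for an agent leaving I."""
    return "D" if rng.random() < cfr(age) else "R"

random.seed(0)
outcomes = [resolve_infection(30) for _ in range(10_000)]
print(outcomes.count("D") / len(outcomes))  # close to the adult CFR of 0.8587
```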
We estimate theage-specific probability of death using previously reportedcase fatality rates (CFR) of EVD for different age groups [ 14].Contacts are sampled daily. We sample household andnon-household contacts separately. We assume that contactsbetween each pair of individuals within a household occursevery day. Non-household contacts are sampled from thepopulation according to the inter-household contact matrixfrom Ozella et al . [13] , collected in a village in rural Malawi,accounting for the age of the person. We assume that thenumber of contacts follows an independent Poisson distri-bution for each age-age contact pair.Each contact has an associated exposure type. For house-hold contacts, we use and sample the exposure types andtheir distributions observed by Bower et al . [1], which in-clude handling fluids, direct and indirect wet and dry con-tacts, and minimal to no contact. Direct contact refers tosituation in which individuals come into direct contact, suchas touching and caring for a patient diagnosed with EVD,whereas an indirect contact refers to situations such as wash-ing clothes or sharing the same bed with an EVD positivepatient. In addition, wet contact refers to contact with anEVD patient that is symptomatic (e.g. vomiting, bleeding,etc.) while dry contact refers to contact with patients with-out any symptoms. Each type of contact associates with adifferent risk level. For example, a direct contact with fluidsis associated with a higher risk of transmission than a dry,physical contact. We let Wx,y,t represent the risk ratio ofthe contact between agents xandy. For household contacts,it is the age-adjusted risk ratio from Bower et al . [1]. Fornon-household contacts, we assign the same type to each,with a risk ratio we set to match with the non-householdSAR reported in Dixon et al . 
[4] (see Inferred parameters).Wx,y,t=0if no contact occurred.We define the probability of transmission from agent xtoagentyon daytasPr(base)·Wx,y,twherePr(base)is an inferred baseline probability of infec-tion. The process for inferring this parameter is described inthe next section.Vaccination. The 2017 Guinea ring vaccination trial demon-strates that the vaccine we considered in our simulations(rVSV-ZEBOV) is safe to administer to individuals who areincubating, but do not yet show symptoms [ 6]. Moreover,rVSV-ZEBOV has 100% effectiveness if administered afterexposure. Therefore, we assume that agents in state ICandSare eligible for vaccination. After vaccination, they transi-tion to state V, and nine days later, they transition to stateR, where agents are considered immune.Inferred parameters. We need to infer the parametersPr(base)andRR(non-household), the non-household riskratio, from data. Pr(base)can be interpreted as the probabil-ity of transmission for a household contact of the minimalcontact type. We set this value in order to match the sec-ondary attack rate (SAR) of the ABM to the SAR that wasRisk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USATable 1. Parameters for the ABM.Parameters Values ReferencesEbola dynamicsIncubation period Lognormal: μ=2.446days,σ=0.284 Legrand et al. [9]Infectious period Lognormal: μ=2.2915 days,σ=0.1332 Legrand et al. [9]Case fatality rate Ages < 15: 77.8% Qin et al. [14]Ages 15 - 59: 85.87%Ages > 59: 95.7%Time from vaccination to immunity 9days Kucharski et al. [8]Household secondary attack rate 12.3% Dixon et al. [4]Non-household secondary attack rate 4.8% Dixon et al. [4]Non-household contact matrix Adults-Children: Poisson, λ=1.2 Ozella et al. 
[13]Adults-Adolescents: Poisson, λ=1.5Adults-Adults: Poisson, λ=5.3Adolescents-Children: Poisson, λ=2.0Adolescents-Adolescents: Poisson, λ=3.6Children-Children: Poisson, λ=0.2Inferred model parametersBase probability of transmission 0.01962 Inferred from Bower et al. [1]Contact type distribution (household) Handled fluids: 16.3%,RR: 9.7 Bower et al. [1]and risk ratios (RR) Direct wet contact: 40.3%,RR: 8.3Direct dry contact: 17%,RR: 5.6Indirect wet contact: 2.6%,RR: 4.9Indirect dry contact: 10%,RR: 1.3Minimal contact: 13.8%,RR: 1Risk ratio for non-household 2.45 Inferred from Equation 2previously reported for Ebola. Specifically, we solve the fol-lowing equation for Pr(base)SARhh=Pr(base)∑︁iPr(i|household contact)RR(i),(1)wherePr(i)is the probability of a contact having type i,RR(i)is the risk ratio associated with contact type i. Thisresults inPr(base)=0.01962 . WithPr(base)identified, wecan solve for RR(non-household):SAR non-hh=Pr(base)RR(non-household), (2)resulting in RR(non-household)=2.45, an intensity be-tween indirect wet and indirect dry contact.3 Risk-based ring vaccinationIn the risk-based ring vaccination strategy, we prioritizethe limited vaccine doses to agents within a ring with thehighest estimated risks. The estimation strategy for risksneeds to be simple and only use information that is easy toobserve. Specifically, we propose estimating risks based oncontact type and household membership and doing so onlywithin a ring—thus, there are at most two contact eventsthat contribute to any estimated risk. We assume that risksare estimated separately for each ring and that there is nocoordination between rings. Risks are updated for each indi-vidual at most once—we update them for contacts of contactsif the contact becomes infected.We define a ring as the contacts and contacts of contacts ofthe infected agent. Let xdenote the seed case for the ring, ydenote a contact of x, andzdenote a contact of y. 
We definethe risk foryasR(y)=Pr(base)·Wx,y, (3)whereWx,yis the risk ratio associated with the highest inten-sity contact between xandyafterxdeveloped symptoms,i.e.,maxtWx,y,t withtinx’s infectious period. For z, wedefine the risk asR(z|yis not infected)=Pr(base)·Wx,y·Pr(base)·Wy,z(4)R(z|yis infected)=Pr(base)·Wy,z, (5)using equation 4 if yis not known to be infected and updatingto use equation 5 if ybecomes infected.Individuals in the ring are then vaccinated in order of theirrisk ranking, i.e., each day the Uunvaccinated individualswho do not have symptoms with highest risk are vaccinated.If there are still some vaccines left after everyone in the ringhas been vaccinated, which can happen when individuals areepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Dinh Song An Nguyen, Marie Charpignon, Kathryn Schaber, Maimuna Majumder, Andrew Perraultunreachable during the vaccination process or in the laterstage of the outbreak, then the remaining vaccines will berandomly distributed to the susceptible agents that are notin the identified clusters.4 Preliminary resultsWe compare the risk-based ring vaccination approach tothree baselines: random vaccination, full ring vaccination,and no prioritization ring vaccination. All baselines vacci-nate only individuals that have no symptoms and are un-vaccinated (i.e., individuals in states SandIC). In randomvaccination ,Uindividuals are vaccinated at random eachday. In no prioritization ring ,Uindividuals that are in a ringare vaccinated and any leftover vaccines are randomly dis-tributed. In full ring ,allindividuals in a ring are vaccinated,relaxing the constraint of Uvaccines per day. In all cases,each individual has a 30% to be unreachable (as in [ 8]). Thedose that would go to that individual instead goes to thenext eligible agent (i.e., the next highest risk in risk-basedor another agent in the ring in no prioritization ring). 
We simulate the ABM with 10 seed cases selected uniformly at random from the population.

By prioritizing the individuals who are most at risk, risk-based ring vaccination substantially reduces the number of infections and deaths (Fig. 1 and Tab. 2). However, the impact of risk-based prioritization varies significantly across dose limits. At all dose limits, we see a statistically significant difference between risk-based prioritization and standard ring vaccination. This difference is most salient for moderate dose limits—for 100 daily doses, risk-based prioritization reduces deaths by roughly a factor of 2 relative to randomized vaccination and 1.8 relative to no-prioritization ring. With 200 doses available, both risk-based and no-prioritization ring differ substantially from randomized vaccination, whereas at 50 and 100 doses, no-prioritization ring and random achieve relatively similar performance. In the case of 50 daily doses, risk-based ring has a smaller impact on the number of infections and deaths (<9% relative to random). However, we see substantial shifting of the infection curve in this setting, delaying the peak by about 20 days.

The full ring strategy (without dose limit) results in few deaths, as the vaccine for EVD is highly effective even when administered after exposure and even when 30% of contacts are unreachable at the time of vaccination. However, the cost of this performance is the need for a surge of vaccination in the first month of 321±179 doses per day. This approach achieves control early, resulting in an average of 111±152 daily doses across the whole period.

5 Discussion and Future Work

Creating control policies during an outbreak is challenging due to resource constraints such as limited healthcare personnel and medical supplies. Using an ABM, we study the impact of ring vaccination strategies under a daily dose limit, and consider EVD as the case study.
We find that, even with a vaccination-infection combination that is highly suited to ring vaccination, ring vaccination has limited impact on new infections relative to random vaccination until the number of doses available is sufficiently high. Moreover, the implementation of risk-based ring vaccination we consider requires only slightly more information (contact types), but has an impact even at much lower numbers of delivered doses.

Phase transitions are expected in vaccination programs due to the exponential dynamics involved in infections: when the number of daily vaccine doses passes a threshold, infections will decay exponentially, and the outbreak can be contained. However, this intuition does not apply directly to ring vaccination. Despite the ability of ring vaccination to identify individuals who have a higher risk of infection than the broader population, the impact on new infections is relatively modest. A small modification of standard ring vaccination—involving risk-based prioritization among documented contacts—induces dramatically different behavior. Specifically, for a small number of doses (Fig. 1), a risk-based approach yields a shift in the time at which the peak in new infections is reached, thus postponing a surge more efficiently than standard ring vaccination and randomized vaccination. Moreover, above a certain threshold, lying between 50 and 100 daily doses in our model, the benefits of the risk-based approach compound and the shift in the timing of the peak is coupled with a significant reduction in the maximum number of new infections. These two distinct effects and their potential coupling are not well understood and merit further study.

A key question is whether more sophisticated vaccination strategies such as ring vaccination are worth the additional overhead cost of reliably identifying and contact tracing cases.
The answer to this question is multi-faceted and will depend on the interplay among outbreak stage, vaccine availability, and the combination of vaccination and infection properties. More effort is needed to understand these interactions: during an infectious disease emergency, resources are scarce and need to be allocated towards the geographical areas or subpopulations that result in the highest impacts, i.e., the largest reduction in the maximum number of new infections and the greatest delay in the timing of the peak.

Our study has several limitations. Our current ABM does not incorporate realistic superspreading dynamics. Yet many infectious diseases demonstrate a high degree of transmission heterogeneity, i.e., relatively few seed cases cause many secondary infections [11]. While not well captured in our model, this aspect has substantial consequences for ring vaccination because the variance of the strategy's outcome is increased, i.e., a single missed secondary case can have a much larger impact on the timing of the peak in new infections and its magnitude than in the absence of transmission heterogeneity.

Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation. epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA

Figure 1. The daily mean count (± standard deviation) of infected under different vaccination strategies: (a) 50 doses, (b) 100 doses, (c) 200 doses. We simulate outbreaks with 10 seed cases for each policy given different numbers of vaccine availability. The shaded region indicates the standard deviation for each vaccination strategy.

Table 2. Mean (95% CI) count of deceased for each strategy and dose limit.

Strategy                  | 50 doses                    | 100 doses                   | 200 doses
Risk-based ring           | 8465.77 (8370.63-8560.91)   | 3268.67 (1399.83-5137.50)   | 175.77 (144.14-207.4)
No prioritization ring    | 9184 (9101.12-9266.88)      | 6091.50 (5915.62-6267.38)   | 784.7 (663.08-906.32)
Random                    | 9272.33 (9164.44-9380.22)   | 6488.57 (6425.06-6552.09)   | 2044.4 (1627.39-2461.41)
Full ring (no dose limit) | 27.33 (10.79-43.87)         |                             |
No vaccination            | 12189.80 (12156.43-12223.17)|                             |

We suspect that accounting for superspreading events would further reduce the benefits of ring vaccination. However, in some circumstances, pronounced superspreading can make risk-based targeting more effective, as observations from a given ring can be used to infer the transmission potential of the seed case.

Furthermore, it is already a hard task to gather contacts and contacts of contacts to form a ring for vaccination. Obtaining information regarding exposure types between infected individuals and their contacts is even more time and resource intensive. Although risk-based ring vaccination is more effective in our results, it is important to consider additional factors like timing and human resources in order to better evaluate the efficacy of our method.

By design, ring vaccination targets individuals with a higher number of contacts or more centrally located in a network. These individuals tend to get infected earlier than their counterparts with an average number of contacts and centrality [3]. Risk-based ring vaccination, by prioritizing individuals with contacts at higher risk, will additionally target individuals in larger households.
This additional feature operates independently from the "encirclement" aspect of standard ring vaccination; more work is needed to quantify their respective contributions (e.g., by comparing risk-based vaccination to strategies that prioritize individuals based on household size).

Acknowledgments

KS was supported in part by grant SES2200228 from the National Science Foundation. MSM was supported in part by grant R35GM146974 from the National Institute of General Medical Sciences, National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

[1] Hilary Bower, Sembia Johnson, Mohamed S Bangura, Alie Joshua Kamara, Osman Kamara, Saidu H Mansaray, Daniel Sesay, Cecilia Turay, Francesco Checchi, and Judith R Glynn. 2016. Exposure-specific and age-specific attack rates for Ebola virus disease in Ebola-affected households, Sierra Leone. Emerging Infectious Diseases 22, 8 (2016), 1403.
[2] Ebola ça Suffit Ring Vaccination Trial Consortium. 2015. The ring vaccination trial: a novel cluster randomised controlled trial design to evaluate vaccine efficacy and effectiveness during outbreaks, with special reference to Ebola. BMJ: British Medical Journal 351 (2015), h3740.
[3] Nicholas A Christakis and James H Fowler. 2010. Social network sensors for early detection of contagious outbreaks. PloS One 5, 9 (2010), e12948.
[4] Meredith G Dixon, Melanie M Taylor, Jacob Dee, Avi Hakim, Paul Cantey, Travis Lim, Hawa Bah, Sékou Mohamed Camara, Clement B Ndongmo, Mory Togba, et al. 2015.
Contact tracing activities during the Ebola virus disease epidemic in Kindia and Faranah, Guinea, 2014. Emerging Infectious Diseases 21, 11 (2015), 2022.
[5] Peter J Dodd, Clare Looker, Ian D Plumb, Virginia Bond, Ab Schaap, Kwame Shanaube, Monde Muyoyeta, Emilia Vynnycky, Peter Godfrey-Faussett, Elizabeth L Corbett, et al. 2016. Age- and sex-specific social contact patterns and incidence of Mycobacterium tuberculosis infection. American Journal of Epidemiology 183, 2 (2016), 156–166.
[6] Ana Maria Henao-Restrepo, Anton Camacho, Ira M Longini, Conall H Watson, W John Edmunds, Matthias Egger, Miles W Carroll, Natalie E Dean, Ibrahima Diatta, Moussa Doumbia, et al. 2017. Efficacy and effectiveness of an rVSV-vectored vaccine in preventing Ebola virus disease: final results from the Guinea ring vaccination, open-label, cluster-randomised trial (Ebola Ça Suffit!). The Lancet 389, 10068 (2017), 505–518.
[7] Mirjam Kretzschmar, Susan Van den Hof, Jacco Wallinga, and Jan Van Wijngaarden. 2004. Ring vaccination and smallpox control. Emerging Infectious Diseases 10, 5 (2004), 832.
[8] Adam J Kucharski, Rosalind M Eggo, Conall H Watson, Anton Camacho, Sebastian Funk, and W John Edmunds. 2016. Effectiveness of ring vaccination as control strategy for Ebola virus disease. Emerging Infectious Diseases 22, 1 (2016), 105.
[9] Judith Legrand, Rebecca Freeman Grais, Pierre-Yves Boelle, Alain-Jacques Valleron, and Antoine Flahault. 2007. Understanding the dynamics of Ebola epidemics. Epidemiology & Infection 135, 4 (2007), 610–621.
[10] Yang Liu, Rosalind M Eggo, and Adam J Kucharski. 2020. Secondary attack rate and superspreading events for SARS-CoV-2. The Lancet 395, 10227 (2020), e47.
[11] James O Lloyd-Smith, Sebastian J Schreiber, P Ekkehard Kopp, and Wayne M Getz. 2005. Superspreading and the effect of individual variation on disease emergence. Nature 438, 7066 (2005), 355–359.
[12] SI Okware, FG Omaswa, S Zaramba, A Opio, JJ Lutwama, J Kamugisha, EB Rwaguma, P Kagwa, and M Lamunu. 2002.
An outbreak of Ebola in Uganda. Tropical Medicine & International Health 7, 12 (2002), 1068–1075.
[13] Laura Ozella, Daniela Paolotti, Guilherme Lichand, Jorge P Rodríguez, Simon Haenni, John Phuka, Onicio B Leal-Neto, and Ciro Cattuto. 2021. Using wearable proximity sensors to characterize social contact patterns in a village of rural Malawi. EPJ Data Science 10, 1 (2021), 46.
[14] Enqiang Qin, Jingfeng Bi, Min Zhao, Ye Wang, Tongsheng Guo, Tao Yan, Zhiwei Li, Juan Sun, Jieli Zhang, Suhong Chen, et al. 2015. Clinical features of patients with Ebola virus disease in Sierra Leone. Clinical Infectious Diseases 61, 4 (2015), 491–495.
[15] Yingrui Yang, Ashley McKhann, Sixing Chen, Guy Harling, and Jukka-Pekka Onnela. 2019. Efficient vaccination strategies for epidemic control using network information. Epidemics 27 (2019), 115–122.
xdAE5_pGjn
Risk-based ring vaccination appears promising. Need more analysis to test robustness (to demographic patterns) and generality (to other infectious diseases)
2: Marginally below acceptance threshold
The paper explores preliminary work on risk-based ring vaccination as an intervention to control the spread of infectious diseases given limited resources. The authors consider the specific case study of Ebola vaccination and compare with multiple protocols: random vaccination, no-prioritization ring vaccination, and full ring vaccination. Simulations show that risk-based ring vaccination is promising for achieving full-ring benefits with significantly fewer resources. However, significantly more experiments are needed to make strong and reliable inferences. Some comments/questions for the authors to think about: 1. Risk-based ring vaccination seems very sensitive to mobility patterns and the presence of NPIs (like lockdowns). Does this only work when communities are sparse and isolated, or also with active mobility patterns? How do you form a ring then? The authors should simulate with real-scale populations, with dynamic movement patterns, and calibrate with real-world data sources before making interventional claims. How was this model calibrated? 2. Compare with other non-ring-based resource-limited vaccination strategies. For instance, during COVID-19, some governments delayed the 2nd dose of the COVID-19 vaccine to prioritize first doses, and prioritized high-risk age groups when the supply was limited, such as [1]. Is risk-based ring vaccination better than these methods? Maybe interesting to study in the next paper. 3. Does risk-based ring vaccination also generalize to other infections like COVID-19/flu, which spread at mass scale, or is it only good when infections are more localized to smaller communities, like Ebola? It would be important to analyze and clarify this distinction. 4. Finally, a "somewhat" similar concept was explored in 106 Canada neighborhoods during the COVID-19 alpha variant, as in [2]. With the alpha variant, most infections were among <18 yr olds but vaccines were not authorized yet.
So, authorities vaccinated parents of children at greater risk from COVID-19, since the vaccine was not authorized for children yet. Is this a form of risk-based ring vaccination? The idea is very intuitive, so I am curious to know if such risk-based rings have been explored previously. I would encourage the authors to think about some of these concerns if they are selected to present at the workshop. I am also okay if the paper is accepted, since it is a non-archival workshop and it would make for good discussion. [1]: https://www.bmj.com/content/373/bmj.n1087 [2]: https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2788978
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Ql4CuaB3-D
KDD.org/2023/Workshop/epiDAMIK
2023
Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization
["Xueqiao Peng", "Jiaqi Xu", "Xi Chen", "Dinh Song An Nguyen", "Andrew Perrault"]
Non-pharmaceutical interventions (NPIs) play a critical role in the defense against emerging pathogens. Among these interventions, familiar measures such as travel bans, event cancellations, social distancing, curfews, and lockdowns have become integral components of our response strategy. Contact tracing is especially widely adopted. However, the optimization of contact tracing involves navigating various trade-offs, including the simultaneous goals of minimizing virus transmission and reducing costs. Reinforcement learning (RL) techniques provide a promising avenue to model intricate decision-making processes and optimize policies to achieve specific objectives, but even modern deep RL techniques struggle in the high-dimensional, partially observable problem setting presented by contact tracing. We propose a novel RL approach to optimize a multi-objective infectious disease control policy that combines supervised learning with RL, allowing us to capitalize on the strengths of both techniques. Through extensive experimentation and evaluation, we show that our optimized policy surpasses the performance of five benchmark policies.
["reinforcement", "npi optimization", "interventions", "contact", "npis", "critical role", "defense", "pathogens", "familiar measures"]
ABSTRACT

Non-pharmaceutical interventions (NPIs) play a critical role in the defense against emerging pathogens. Among these interventions, familiar measures such as travel bans, event cancellations, social distancing, curfews, and lockdowns have become integral components of our response strategy. Contact tracing is especially widely adopted. However, the optimization of contact tracing involves navigating various trade-offs, including the simultaneous goals of minimizing virus transmission and reducing costs. Reinforcement learning (RL) techniques provide a promising avenue to model intricate decision-making processes and optimize policies to achieve specific objectives, but even modern deep RL techniques struggle in the high-dimensional, partially observable problem setting presented by contact tracing. We propose a novel RL approach to optimize a multi-objective infectious disease control policy that combines supervised learning with RL, allowing us to capitalize on the strengths of both techniques. Through extensive experimentation and evaluation, we show that our optimized policy surpasses the performance of five benchmark policies.

KEYWORDS

reinforcement learning, machine learning, contact tracing, public health

ACM Reference Format:
Xueqiao Peng, Jiaqi Xu, Xi Chen, Dinh Song An Nguyen, and Andrew Perrault. 2023. Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA, 7 pages.

1 INTRODUCTION

The COVID-19 pandemic has highlighted the crucial role of non-pharmaceutical interventions (NPIs) in effectively managing the spread of infectious diseases.
The implementation of NPIs requires careful consideration of multiple objectives, including the prevention of viral transmission and the reduction of costs associated with quarantine measures. Contact tracing has emerged as a widely adopted policy within the realm of NPIs and has been extensively studied in the context of COVID-19 [7, 8, 11, 21].

Nevertheless, optimizing NPIs remains a challenging open problem in many settings for several reasons. First, the objective is inherently multi-objective—intensified control efforts lead to higher costs. In addition, sensing actions, such as testing, may be included in all but the earliest stages of an infectious disease crisis. These have their own costs and constraints associated with them. Second, inferring the probability that an individual is infectious is difficult for infections that do substantial transmission asymptomatically, such as SARS-CoV-2. This inference problem is perhaps surprisingly high dimensional, as we show it is dependent on the symptom status and test results of all individuals in the same cluster due to transmission heterogeneity.

Figure 1: Illustration of our approach. We combine an infection probability decoder that uses supervised learning with a reinforcement learning-based policy.

In this work, our goal is to develop a generic approach for cluster-level optimization of NPIs.
To tackle this challenge, we propose a novel approach that integrates a convolutional neural network (CNN) and a reinforcement learning (RL) model [5, 20] (Fig. 1). The CNN is used to solve the high-dimensional infection inference problem and uses a novel representation of the symptom and test state of the entire cluster as input, allowing a single CNN to be trained for all cluster sizes. The RL agent takes the CNN output and other features as its state, selects an action for each individual (including quarantine and testing), and aims to maximize a multi-objective reward function. This reward function includes a penalty for days where an individual is infectious but not isolated, a penalty for days where they are quarantined but not infectious, as well as a cost for any control action that is taken (e.g., test cost). As a case study, we have developed a branching process-based SARS-CoV-2 virus simulator, where we evaluate the effectiveness of our method. In this work, we focus on optimization only—in the longer term, we aim to use the results of optimization to automatically discover simple, implementable policies.

This paper makes the following contributions:

• We propose a novel RL approach for finding optimal contact tracing policies. Our approach combines a supervised learning model with an RL model, leveraging the strengths of both techniques to optimize the desired objectives. The resulting agent can be trained and deployed simultaneously across all cluster sizes.
• We show the existence of a theoretically simple, yet optimal, threshold-type policy for contact tracing in the setting where no sensing actions are available. Running this policy requires supervised learning only.
• We develop a simple branching process-based model for SARS-CoV-2 and compare our policies with baselines.
We show that we achieve better rewards across a range of objective parameters.

Related work. We identify two main thrusts of work that optimize contact tracing and NPIs: network and branching process. Network models represent connections between individuals as edges in a possibly dynamic contact graph [4, 9, 12, 15, 16]. These approaches can leverage network structure in their decisions but make the strong assumption that the entire contact network is known. The closest existing approach to ours is RLGN [12], which formulates the problem as a sequential decision-making task within a temporal graph process. These approaches often consider a fixed budget of interventions rather than a multi-objective reward function. In contrast, branching processes are used, resulting in a cluster-based, tree-structured view of contagion [10, 13, 17]. These approaches have the advantage of aligning more closely with the information available to public health decision-makers in many practical settings (but allow for less expressive policies). All of these models are agent-based in the sense that they model individuals rather than subpopulations—because contact tracing decisions depend on the specific time that certain events happen for individuals (e.g., exposure, symptoms), the additional detail that agent-based models provide is valuable for modeling and optimization.

2 BRANCHING PROCESS ENVIRONMENT

We take a branching process-based view of an infectious disease crisis (Fig. 2). We track two generations of potential individuals: the seed case and their contacts. We assume that interventions begin after a reporting and tracing delay. At that point, day t_start (t_start = 3 in Fig. 2), we observe the symptom history for each agent up to day t and must decide which action to take for each agent (e.g., quarantine, test). On day t, we observe the symptom state of each agent plus the results of any sensing actions (defined below) we have taken up to day t and must decide what action to take for each agent on day t.
The simulation proceeds for a fixed period of time until T.

Figure 2: An agent-based branching process model. The diagram depicts standard contact tracing for an example seed case with six contacts.

In Fig. 2, we present an application of a contact tracing policy in the branching process framework. The seed case remains infectious for two days without exhibiting symptoms, followed by one day with symptoms, before entering isolation. In this example, all six contacts were exposed on the same day. Contacts 1 and 4 are infected and show symptoms on day 2 and day 3, respectively. All contacts are asked to quarantine if their infection probability is higher than a threshold. Contacts 3 and 5 begin quarantine on day 3. Contacts 2 and 6 start quarantining on day 4.

In an infectious disease crisis, we can use whatever data is available to construct such a branching process model. Many of the required components are distributions that are often estimated by epidemiologists in the early stages of an outbreak. We describe the distributions we used to simulate SARS-CoV-2 and their sources in Tab. 1. Components that are not known can be filled in conservatively, or sensitivity analysis can be performed. In some cases, distributional estimates can be shared across diseases—for example, POLYMOD [14] provides contact distributions for the US and Western European settings for both droplet and physical contact. The superspreading dynamics of infection can be impactful because most transmission is often driven by a small number of seed cases, and this concentration can be exploited by control policies [17].
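A minimal skeleton of the two-generation simulation loop described above can be sketched as follows. This is an assumed structure for illustration: the agent fields, the action set, and the policy interface are invented, and none of the calibrated distributions from Tab. 1 are used.

```python
T = 30        # simulation horizon in days
T_START = 3   # reporting/tracing delay before interventions begin

def run_episode(cluster, policy):
    """cluster: list of dicts with 'symptom_day' (int or None) and
    'quarantined'; policy maps (observation, day) -> action name."""
    history = []
    for day in range(T):
        for agent in cluster:
            # Symptoms are observable from symptom onset onward.
            obs = {"symptoms": agent["symptom_day"] is not None
                               and day >= agent["symptom_day"]}
            # Before day T_START, no interventions are possible.
            action = policy(obs, day) if day >= T_START else "none"
            if action == "quarantine":
                agent["quarantined"] = True
            history.append((day, action))
    return history

# Example: quarantine any agent once it shows symptoms.
cluster = [{"symptom_day": 2, "quarantined": False},
           {"symptom_day": None, "quarantined": False}]
run_episode(cluster, lambda obs, day: "quarantine" if obs["symptoms"] else "none")
```

In this toy run, the symptomatic agent is quarantined on day T_START (the first day interventions are allowed) and the asymptomatic agent is never quarantined.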
Nevertheless, superspreading dynamics are often poorly understood, especially early in a crisis, and greater understanding would benefit approaches such as this paper's.

We define the objective function as

(−S1 − α2 × S2 − α3 × S3) / cluster_size,   (1)

where

• S1 is the count of transmission days where an infected individual is not quarantined,
• S2 is the count of days where a quarantined individual is not infected, and α2 (which we assume is in [0, 1]) is the weight for this term,
• S3 is the sum of the action costs (e.g., test cost), and α3 is the weight for this term, and
• cluster_size normalizes the objectives to a score per individual.

In summary, the objective function seeks to minimize the number of transmission days (i.e., days where an individual is infectious but not quarantined), minimize the number of days of non-effective quarantine, and minimize the cost associated with actions.

Table 1: Parameters of the SARS-CoV-2 branching process model

Parameter | Assumed value | Details and references
Incubation time | Log-normal: log mean 1.57 days and log std 0.65 days | Mean: 5.94 days. Bi et al. [2]
Duration of infectious period | 7 days—2 days before and 5 days after onset if symptomatic | Bi et al. [2]
Probability that an infected individual shows symptoms | 0.8 | Buitrago-Garcia et al. [3]
Probability of symptoms without infectiousness | 0.01 per day | Perrault et al. [17]
Probability of asymptomatic infection | 0.2 | Buitrago-Garcia et al. [3]
Probability of highly transmissive | 0.109 | Perrault et al. [17]
Infectiousness multiplier for highly transmissive individuals | 24.4 | Perrault et al. [17]
Test parameters | TP = 0.86, FP = 0.66, TN = 0.14, FN = 0.34 | Besutti et al. [1]
Delays | Observation delay = 3 days, test result delay = 1 day | Assumed

We consider two action types. Quarantine-type actions reduce the number of transmission days for an agent.
The simplest quarantine-type action causes an agent to not produce a transmission day with probability 1 and incurs no additional cost. A more complex quarantine-type action may work probabilistically (because an individual may not choose to quarantine if directed), incur an additional cost (e.g., the cost of checking in with that individual by phone), or may be coupled with a sensing action (see below). Quarantine-type actions contribute to S2 if the individual quarantines and is not infected.

Sensing-type actions do not directly affect the number of transmission days. Instead, they reveal information about an individual's infectious state according to a probability distribution. For example, if someone has had known exposure to an infected person but does not show symptoms, an antigen test can reveal whether this person is infected or not. Actions can combine both sensing and quarantine, e.g., an action that performs an antigen test and then quarantines if the result is positive.

3 APPROACH

We show that the optimization problem from the previous section can be formulated as a partially observable Markov decision process (POMDP). However, solving this POMDP directly is wildly intractable.
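The objective in Equation 1 and the two action types of the previous section can be sketched as follows. This is an illustrative sketch, not the authors' simulator: the action names and helper structure are invented, while the test rates follow the values listed in Tab. 1.

```python
import random

def objective(s1, s2, s3, alpha2, alpha3, cluster_size):
    """Equation 1: (-S1 - alpha2*S2 - alpha3*S3) / cluster_size."""
    return (-s1 - alpha2 * s2 - alpha3 * s3) / cluster_size

# Example per-individual action menu combining the two action types
# (including a combined test-then-quarantine-if-positive action).
ACTIONS = {
    "none":                        {"quarantine": "never",       "test": False},
    "quarantine":                  {"quarantine": "always",      "test": False},
    "test":                        {"quarantine": "never",       "test": True},
    "test+quarantine":             {"quarantine": "always",      "test": True},
    "test+quarantine_if_positive": {"quarantine": "if_positive", "test": True},
}

def noisy_test(infectious, rng):
    """Sensing action: noisy test result (positive rates as listed in Tab. 1)."""
    positive_rate = 0.86 if infectious else 0.66
    return rng.random() < positive_rate

def apply_action(name, infectious, rng):
    """Return (quarantined_today, test_result_or_None) for one individual."""
    spec = ACTIONS[name]
    result = noisy_test(infectious, rng) if spec["test"] else None
    if spec["quarantine"] == "always":
        quarantined = True
    elif spec["quarantine"] == "if_positive":
        quarantined = bool(result)
    else:
        quarantined = False
    return quarantined, result
```

A day's outcomes across the cluster would then be accumulated into S1, S2, and S3 and scored with `objective`.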
Some hope arrives from the result that, under a simplified model that contains no sensing-type actions, the POMDP can be solved optimally if the probability that an individual is infectious can be estimated—itself a challenging problem due to the high-dimensional observation space.

Motivated by this conclusion, we formulate our solution approach: we use a convolutional neural network (CNN) to estimate the probability of infectiousness for each individual in a cluster, and this output, along with cluster-wide statistics, serves as the state for the RL agent.

3.1 POMDP Formulation

We define a POMDP [6] as ⟨S, A, R, P, Ω, O, γ, S0⟩, where S and A represent the state and action spaces, respectively, R: S × A → ℝ is the reward function, P: S × A → ΔS is the transition function, Ω is the observation space, O: S × A → ΔΩ gives the observation probabilities, γ ∈ [0, 1] is the discount factor, and S0: ΔS is the distribution of initial states.

We briefly describe how to interpret the control problem of the previous section as a POMDP. We define the state space as containing all of the relevant information required to simulate the cluster, including whether the seed case is highly transmissive, whether each contact of a seed case will become infected, whether they will show symptoms, and if so, on what day. This simulator data cannot be observed directly—instead we must rely on receiving action-dependent observations. We define the action space as the set of daily quarantine and sensing actions that are available for each individual in the cluster. For instance, in our experiments, we consider five actions: no quarantine and no test, quarantine and no test, test and no quarantine, test and quarantine, and test and quarantine only if positive. If we have N individuals in the cluster, we have an action space of size |A|^N. For observations, we receive two types of information from each individual at each timestep: symptom information and test results.
We receive test results only when a sensing-type action is taken, and these results are noisy (Tab. 1). Similarly, we always observe symptoms if they are present, but both infectiousness without symptoms and symptoms without infectiousness are possible. The resulting observation space size is 4^N.

In principle, solving the POMDP formulation results in the optimal control policy. In practice, exact solving is not possible due to the high computational complexity of the best-known algorithms. A particular source of difficulty is the problem of calculating the posterior probability of infection for each individual given the observations. A key challenge is that the variation in infectiousness of the seed case causes the posterior probability of infection for each individual to depend on the observations for all other individuals. Intuitively, observing symptoms or positive test results for one individual makes it more likely that the seed case is highly transmissive and thus more likely that each other individual is infected.

3.2 Optimal Policy Without Sensing Actions

We first consider a simplified POMDP where the only actions available are a quarantine action and a no-quarantine action. We show that, if the posterior probability of infection can be calculated exactly, the optimal policy has a threshold-type form: if the posterior probability of infection is above a threshold, we quarantine, and otherwise do not. We show this initially for a costless quarantine action with 100% efficiency, as this is what we use in experiments (Thm. 1). We then generalize the result to any menu of non-sensing actions, because the expected reward of each action can be exactly calculated given the posterior probability of infection (Thm. 2). We remark that these results provide additional context to the findings of Perrault et al.
[17] by defining the class of optimal risk-based policies.

Let p_inf represent the posterior probability of infection for an individual given the observations so far.

Theorem 1. With a costless quarantine action that is always successful and a null action, under the objective function of Eq. 1, the optimal policy is to quarantine if p_inf > α2/(1+α2) and take the null action otherwise.

Proof. Because we have access to the exact posterior probability of infection, we can calculate the expected objective value for each action exactly:

E[r] = −α2·(1−p_inf)   if quarantined,
E[r] = −p_inf          if not quarantined.    (2)

We can then show that if p_inf > α2/(1+α2), the quarantine action has higher expected reward. □

We can use the above proof technique to derive the optimal policy for any menu of non-sensing actions. A useful generalization is when the quarantine action has a cost and a failure rate.

Theorem 2. With a quarantine action with success rate 0 ≤ β ≤ 1 and cost 1 and a null action, the optimal policy is to quarantine if p_inf > (α2·β + α3)/((1+α2)·β) and otherwise do not.

These results highlight the importance of the posterior probability of infection. We next dedicate our attention to producing useful estimates of p_inf.

3.3 Supervised Learning

We could use RL directly to solve the POMDP using the observation information as the state. Indeed, we show that this is somewhat effective if we leverage the state representation we develop in the next section. However, as we know the unobserved infectious state for each agent in simulation, we hypothesize that using a supervised learning model to predict p_inf and using this as input to the RL algorithm will lead to better objective values compared to pure RL (and in the experiments, we see that the improvement is often substantial).
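The threshold rules of Thms. 1 and 2 in Sec. 3.2 can be checked numerically by comparing expected rewards. Below is a minimal Python sketch (our own; the function names, and the reward model for a quarantine that succeeds with probability β, are assumptions reconstructed from the theorem statements):

```python
# Sketch of the threshold policies of Thms. 1 and 2 (names are ours).

def threshold(alpha2: float, alpha3: float = 0.0, beta: float = 1.0) -> float:
    """Quarantine iff p_inf exceeds this value (Thm. 2); with alpha3 = 0
    and beta = 1 this reduces to Thm. 1's alpha2 / (1 + alpha2)."""
    return (alpha2 * beta + alpha3) / ((1.0 + alpha2) * beta)

def expected_reward(p_inf, quarantine, alpha2, alpha3=0.0, beta=1.0):
    """Expected one-step objective contribution for one individual.
    Assumed model: quarantine succeeds w.p. beta (paying alpha2 if the
    individual is not infected), fails w.p. 1 - beta (paying the expected
    transmission-day cost p_inf); the action itself costs alpha3 * 1."""
    if quarantine:
        return beta * (-alpha2 * (1.0 - p_inf)) + (1.0 - beta) * (-p_inf) - alpha3
    return -p_inf

def optimal_action(p_inf, alpha2, alpha3=0.0, beta=1.0) -> bool:
    """True means quarantine."""
    return p_inf > threshold(alpha2, alpha3, beta)
```

Setting the two expected rewards equal and solving for p_inf recovers exactly the fraction in Thm. 2, which is how the threshold function above was derived.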
Another option for estimating p_inf would be to use an algorithm for approximate probabilistic inference such as Markov chain Monte Carlo, but doing so is challenging due to the high dimensional discrete observation space, where most observations have zero probability for a given state of infectiousness.

A key question for applying supervised learning is how to represent the observation space. We have two desiderata. First, we would like the representation to not vary with cluster size. We can also achieve this property in the RL agent, resulting in an agent that can simultaneously be deployed across all cluster sizes, which makes both training and deployment simpler. Second, there is an advantage to using a representation that inherently accounts for the symmetries that arise due to the ordering of individuals, i.e., if we permute the order of individuals in an observation, it should not affect p_inf for each individual.

After testing several representations that satisfy these properties, we arrive at the 7×T matrix shown in Fig. 3, where T is the simulation length (in our experiments, T = 30). This is an egocentric representation of the observation—it is from the perspective of a particular contact and contains all information gathered so far. We train the supervised learning model f to produce output of dimension [0,1]^T, i.e., for every day of the simulation, the probability that the agent will be infectious given the observation, using simulation outputs where the infectiousness of each individual is provided.

The representation contains the following information. The first row is 1 for each day after (inclusive) that the individual shows symptoms. The second row is a binary indicator of whether this day is in the future (1 if yes). The third row is a count of the number of individuals in the cluster that have shown symptoms up to (inclusive) day t. The fourth row is the total number of contacts in the cluster minus 1 (constant across time). The fifth row is t.
The sixth row is 1 if a test was conducted for this individual, and the seventh row represents the results of that test (with a one-day delay). In row 2, 0s are used to indicate that an observation was made by this day and 1s represent the future. In rows 6 and 7, 0s are used to represent the future (no test was ordered and no results were received). We will show that this representation can achieve an AUC of 0.95 in predicting infectiousness for our branching process model if an appropriate architecture is selected.

[Figure 3: The observation representation used for supervised learning, shown on a cluster of size 10 after observing the outcome of day 2. Rows: symptoms shown by day t; past/future indicator (0 for past and present, 1 for future); total symptom count in cluster; cluster size − 1; t; test on day t; day t−1 test positive.]

[Figure 4: The supervised learning (CNN) output is used as input to the RL state, which prioritizes immediately relevant information. The 7×30 input matrix is fed to the CNN (a 2D convolution followed by a linear layer), whose infection-probability outputs enter the RL state: p_inf for the past three days, p_inf for the next three days, the symptom indicator for the past three days, the test indicator for the past three days, the test results for the last three days, the cluster size, and the number of tests run across the cluster in the past three days.]

3.4 Reinforcement Learning

To make RL effective, we develop a compact state representation that includes supervised learning outputs. As with supervised learning, we want the representation to have the same size for all clusters and to naturally encode permutation invariance. The representation we use is a 7×3 matrix shown in Fig. 4. As with the supervised learning representation, it is egocentric and time-specific. The first and second rows represent the p_inf outputs from supervised learning for the last three days and next three days, respectively.
The third row indicates whether the individual exhibited symptoms on each day in the past three days. The fourth row is an indicator for whether this individual was tested on each of the past three days. The fifth row denotes the test results with a one-day delay. The sixth row is the cluster size. The last row indicates the number of tests conducted in the cluster in the past three days.

Training the RL algorithm is straightforward. First, we train the supervised learning predictor from data collected from the simulator. In our experiments, we use a fixed but stochastic control policy to collect this data. This has the advantage that a single supervised learning training run can serve as input to an arbitrary number of RL training runs. If the optimal policies are dramatically different from the data collection policy, an additional run of supervised learning training can be performed with the current RL policy to increase its accuracy.

Once the supervised learning predictor is trained, we train RL with Proximal Policy Optimization (PPO) [19]. In our experiments, we use six different policy initializations, train each for 800,000 environment interactions, and pick the best based on 100 evaluation runs. All training is performed on a single core, using Intel [email protected] with 8GB of RAM, and a single RL training run takes 20 minutes.

4 EXPERIMENTS

We compare different control policies in the branching process environment we construct for SARS-CoV-2. We consider a set of five control actions for each individual for each day: null action, quarantine, test but don't quarantine, quarantine but don't test, and test and quarantine only if results are positive. We assume that there is no failure rate for actions; all actions that include a test cost 1 and others are costless. For α2, we use small values of 0.01 and 0.02, as typical SARS-CoV-2 contact tracing policies accept a large number of quarantine days for non-infectious individuals. For α3, we use values of 0.001, 0.005, 0.01, 0.02, 0.03 and 0.2.
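For illustration, the egocentric 7×T supervised-learning observation of Fig. 3 (Sec. 3.3) can be assembled as below. This is our own reconstruction from the prose description, not the authors' code: the function and argument names are invented, and how rows 1 and 3 are filled for future days is an assumption (we repeat the last observed value, which matches the worked example in Fig. 3):

```python
# Minimal sketch of the 7 x T observation matrix of Fig. 3 for one contact.
T = 30  # simulation length in days, as in the paper's experiments

def observation_matrix(symptom_onset, cluster_symptom_counts, cluster_size,
                       test_days, test_results, current_day, horizon=T):
    """Rows follow Sec. 3.3: (1) symptomatic by day t, (2) future-day flag,
    (3) cluster symptom count up to day t, (4) cluster size - 1, (5) t,
    (6) test taken on day t, (7) day t-1 test result (one-day delay).
    Future cells in rows 6-7 are 0; rows 1 and 3 clamp to the last
    observed day (our assumption)."""
    obs = [[0] * horizon for _ in range(7)]
    for t in range(horizon):
        observed = t <= current_day
        obs[1][t] = 0 if observed else 1          # 0 past/present, 1 future
        obs[3][t] = cluster_size - 1              # constant across time
        obs[4][t] = t
        u = min(t, current_day)                   # clamp to last observed day
        obs[0][t] = int(symptom_onset is not None and u >= symptom_onset)
        obs[2][t] = cluster_symptom_counts[u]
        if observed:
            obs[5][t] = int(t in test_days)
            obs[6][t] = int(test_results.get(t - 1, 0))
    return obs
```

With symptom onset on day 1, three symptomatic cluster members, cluster size 10, tests on days 1 and 2, a positive day-1 result, and day 2 observed, the first four columns reproduce the values shown in Fig. 3.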
We sample cluster size from a uniform distribution on (2, 40). The model code is available online (https://github.com/XueqiaoPeng/CovidRL).

4.1 Supervised Learning Model

We experiment with a variety of supervised learning model architectures (Tab. 2) to find one that achieves a high AUC across cluster sizes. We find that CNNs are generally most effective and compare different kernels and layer structures. In single-layer architectures, we find that larger 2D convolutions tend to achieve higher AUC. We then found that a single convolution layer followed by a linear layer performs just as well as deeper architectures—this setup of a (5, 2) 2D convolution followed by a linear layer is what we use in the experiments below.

Table 2: We find that two-layer architectures using a 2D convolution followed by a linear layer achieve performance on par with larger models. (Columns: AUC at cluster size 4 / 8 / 16 / 32.)

1 Layer
  Conv1d (5,2):  0.798  0.807  0.823  0.830
  Conv1d (5,3):  0.814  0.830  0.835  0.839
  Conv2d (5,2):  0.800  0.814  0.827  0.830
  Conv2d (5,3):  0.832  0.820  0.838  0.840
  Conv2d (5,4):  0.858  0.849  0.843  0.859
  Conv2d (5,5):  0.864  0.895  0.893  0.893
2 Layer
  Conv1d (5,2) + Conv1d (1,2):  0.824  0.830  0.833  0.840
  Conv2d (5,3) + Conv2d (1,3):  0.883  0.903  0.898  0.897
  Conv2d (5,2) + Linear Layer:  0.955  0.960  0.947  0.961
  Conv2d (5,3) + Linear Layer:  0.951  0.960  0.940  0.964
3 Layer
  Conv1d (5,3) + Conv1d (1,3) + Linear Layer:  0.958  0.957  0.950  0.961
4 Layer
  Conv1d (4,3) + Conv1d (2,3) + Conv1d (1,3) + Linear Layer:  0.958  0.958  0.953  0.965

4.2 Benchmark Policies

We compare the RLSL approach we propose to several baselines.

• Threshold is the threshold-type policy suggested in Sec. 3.2. It does not use test actions.
This policy turns out to be highly conservative and results in long quarantine durations for all contacts for the tested α2 values.

Table 3: RLSL achieves higher objective values (higher is better) than baselines across all tested α2 and α3. (Columns: α3 = 0.001, 0.005, 0.01, 0.02, 0.03, 0.2; the baselines do not depend on α3.)

  RLSL (Ours), α2 = 0.01:  −3.77±0.25  −10.27±0.15  −17.13±0.48  −44.22±0.84  −46.46±1.47  −110.92±1.54
  RLSL (Ours), α2 = 0.02:  −4.01±0.21  −17.64±0.32  −25.39±0.48  −49.28±0.66  −64.45±0.83  −120.21±0.22
  Threshold:                 −21.79±0.20 (α2 = 0.01);  −43.65±0.32 (α2 = 0.02)
  Symptom-Based Quarantine:  −111.13±14.18 (α2 = 0.01);  −112.60±11.94 (α2 = 0.02)
  14 Days Quarantine:        −97.18±9.97 (α2 = 0.01);  −106.63±11.00 (α2 = 0.02)
  No Quarantine:             −235.98±18.53 (α2 = 0.01);  −242.16±20.38 (α2 = 0.02)

Table 4: S1, S2 and S3 per individual compared across different cluster sizes (lower is better), using α2 = 0.01 and α3 = 0.01. Even relatively conservative strategies such as 14-day quarantine from exposure fail to isolate some infections in our simulation. RLSL can benefit substantially from the additional information available in large clusters, resulting in strong performance with low test costs. (Each cell: S1 / S2 / S3; "-" where the policy incurs no such cost.)

  Cluster size = 4:
    RLSL:                      0.064±0.008 / 6.808±0.184 / 10.144±0.052
    Threshold:                 0.078±0.013 / 16.012±0.211 / -
    Symptom-Based Quarantine:  1.418±0.199 / 0.236±0.029 / -
    14-day Quarantine:         1.042±0.072 / 2.469±0.113 / -
    No Quarantine:             2.361±0.195 / - / -
  Cluster size = 8:
    RLSL:                      0.077±0.012 / 7.552±0.099 / 11.825±0.056
    Threshold:                 0.063±0.013 / 17.656±0.198 / -
    Symptom-Based Quarantine:  1.207±0.187 / 0.239±0.014 / -
    14-day Quarantine:         0.965±0.082 / 2.440±0.144 / -
    No Quarantine:             2.597±0.282 / - / -
  Cluster size = 16:
    RLSL:                      0.075±0.011 / 10.033±0.127 / 11.253±0.087
    Threshold:                 0.05±0.008 / 19.681±0.173 / -
    Symptom-Based Quarantine:  1.196±0.052 / 0.232±0.017 / -
    14-day Quarantine:         0.973±0.114 / 2.291±0.125 / -
    No Quarantine:             2.075±0.203 / - / -
  Cluster size = 32:
    RLSL:                      0.054±0.007 / 8.259±0.090 / 10.808±0.134
    Threshold:                 0.016±0.003 / 20.701±0.319 / -
    Symptom-Based Quarantine:  1.072±0.146 / 0.261±0.042 / -
    14-day Quarantine:         0.929±0.107 / 2.004±0.155 / -
    No Quarantine:             1.856±0.173 / - / -

Table 5: In cases where test costs are higher, RLSL produces policies that test too often, resulting in lower performance than RLSL models with only quarantine actions—we discuss potential fixes. (Columns: α3 = 0.001, 0.005, 0.01, 0.02, 0.03, 0.2.)

  RLSL, α2 = 0.01:               −3.77±0.25  −10.27±0.15  −17.13±0.48  −44.22±0.84  −46.46±1.47  −110.92±1.54
  RLSL, α2 = 0.02:               −4.01±0.21  −17.64±0.32  −25.39±0.48  −49.28±0.66  −64.45±0.83  −120.21±0.22
  RLSL (Daily Test), α2 = 0.01:  −4.30±0.42  −13.15±0.15  −24.46±0.17  −45.62±1.27  −74.68±0.2  −737.78±3.33
  RLSL (Daily Test), α2 = 0.02:  −12.81±0.55  −23.72±0.47  −27.25±0.58  −50.50±0.11  −75.88±0.26  −739.98±1.516
  RLSL (No Test):                −34.56±0.39 (α2 = 0.01);  −52.92±0.13 (α2 = 0.02)
  RL Only, α2 = 0.01:            −14.64±0.79  −20.32±0.83  −34.02±0.70  −46.10±1.14  −53.22±1.01  −84.35±1.04
  RL Only, α2 = 0.02:            −15.36±0.76  −25.66±0.56  −39.80±0.39  −63.07±0.81  −70.56±0.827  −162.4±2.36
  Threshold (SL Only):           −21.79±0.20 (α2 = 0.01);  −43.65±0.32 (α2 = 0.02)

• Symptom-Based Quarantine quarantines if an individual exhibits symptoms on the day before the observed day and otherwise does not.
• 14-Day Quarantine quarantines individuals from the initial day they exhibit symptoms until either 14 days have passed or until they no longer exhibit symptoms, whichever is later. No test action is included.
• No Quarantine always performs the null action.

4.3 Analysis

Our experimental results report the average objective value and standard error taken over 10 random clusters (Tab. 3). We find that RLSL and Threshold achieve better performance than baselines in all cases. However, our current methods for RLSL struggle relative to Threshold when tests are expensive. Our experimental results could be broadened by including more α values and more analysis as to where the RLSL policies gain their advantage (but see the discussion of Tab. 5 below for some insights).

Focusing on the setting of α2 = 0.01 and α3 = 0.01, we report objective values broken out by component and by cluster size as measured per individual (Tab. 4). Here we can get an intuitive grasp of what is happening in the different policies. Threshold aggressively quarantines, resulting in S2 = 16–20, i.e., 16–20 days of quarantine without infection per contact, for the tested α values. This is able to drive S1 to a low value, resulting in an average objective value of −21.79. Recall that S1 is much more highly weighted (100 times higher) than S2 in this setting. Symptom-based and 14-day quarantine reduce S2 by a factor of 8 to 100, but this causes S1 to be roughly 150 to 200 times higher. By leveraging tests, RLSL can reduce S2 by a factor of 2–3 and S1 by a factor of 0.8–3.5.

In the ablation study (Tab. 5), we gain a more detailed view into the operation of the RLSL policy.
We see that the introduction of the SL outputs to the RL state results in better performance in all tested scenarios compared to RL Only, which uses the state representation of Fig. 4 without the first two rows.

We can observe limitations of the supervised infectiousness prediction model in Tab. 4, where the S2 cost does not decrease as cluster size increases—from Thm. 1, we can conclude that if p_inf is correct, the ratio of S1 to S2 should not depend on cluster size for Threshold. There are several possible causes of this issue. First, the SL model outputs might be miscalibrated, as is often the case for neural networks trained on highly imbalanced data. This issue could be fixed with post-hoc calibration such as Platt scaling [18]. In this instance, a more sophisticated calibration could be employed, with separate calibration parameters per cluster size if necessary. Second, it may be the case that the SL model outputs are wrong for reasons other than calibration. For example, the model may receive insufficient relevant training data, as it is trained on data produced from a random policy and not from Threshold or RLSL. It is also possible that we performed an insufficient architecture search.

We also see that RLSL (No Test) often performs better than RLSL as test costs increase. This suggests that RLSL is not finding a true optimal policy. This could likely be addressed by using a wider range of initialization values for RLSL—for example, initializing some seeds to policies that test very little (the initialization we use for RLSL and RL Only tests heavily). This observation has a silver lining: RLSL (No Test) can achieve much stronger performance than baselines even without tests. This implies that RLSL (No Test) is able to correct for the errors in Threshold to find a policy closer to what is suggested by Thm.
1.

5 DISCUSSION AND FUTURE WORK

This work aims to develop a generic multi-objective optimization approach for cluster-level optimization of NPIs. We formulate this problem for RL in a branching process environment. We present initial results that demonstrate the potential of our approach—in a branching process model of SARS-CoV-2, we can achieve substantially higher objective values than baseline policies. The resulting policies can be applied across all cluster sizes and do not take much time to train on consumer hardware. The policies we propose are able to heavily exploit superspreading dynamics.

Our vision for an infectious disease crisis is that a canonical probabilistic model of the disease is constructed and updated throughout the crisis. The model can be constructed from estimates of key disease parameters that are made from various sources throughout a crisis and can reflect uncertainty in these estimates. We advocate that superspreading dynamics be given substantial attention in these early stages due to the substantial influence we find they can have on interventions. Using this canonical model, a branching process environment can be constructed and optimized against, as we propose in this paper. We do not consider uncertainty in the parameters of this model, but it is possible to do so with existing techniques, and doing so leads to different RL algorithmic choices depending on the form of the uncertainty and the desired objective.

A key disadvantage of our approach as presented is the complexity of the resulting policies. For instance, executing our RLSL policy requires training and drawing outputs from two neural networks. In contrast, policies that were employed in the SARS-CoV-2 pandemic consisted of short lists of rules. We believe that this is not an inherent weakness of our approach—we can leverage interpretable ML and RL techniques to "distill" the RLSL policies into, say, low-depth decision trees, allowing them to be applied at scale with low logistical cost.
There will be some decrease in quality, but we suspect still a substantial advantage over baselines.

An area for future study is the cost and benefit of taking a cluster- rather than individual-level view of policy application. This imposes additional logistical costs, and the benefit is dependent on the degree of cluster-level transmission heterogeneity that is present. This trade-off is not well understood and is a critical area for future work.
2Bqw6JYNKG
Interesting problem.. parts of approach unclear
2: Marginally below acceptance threshold
The paper aims to learn contact tracing policies by bridging SL and RL. In implementation, a CNN is used to estimate the probability of infectiousness for each individual in a cluster, and this output, along with cluster-wide statistics, serves as the state for the RL agent, which learns a cluster-level lockdown policy. Briefly, data is first sampled from the simulator under arbitrary control policies to train a predictor of agent infectiousness. Once the predictor is trained, its outputs are used to define the state of the RL agent, which learns cluster-level policies. The cluster sizes are from 8 to 32. The policy is learned over a space of 5 discrete actions with an objective to minimize (number of transmission days + non-effective quarantine + costs). The problem of interest is very impactful and the idea of using RL to learn NPI policies is exciting. However, the current formulation and assumptions seem a bit unrealistic and the results are also not very encouraging. I would suggest the authors revisit the experiment design and then resubmit the manuscript. Some comments and questions to think about: 1. The SL training setup seems highly unrealistic since it uses ground truth not available in the real world. How can the exact infection probability of an individual be estimated for ground truth? How will this approach generalize? Even the results in Table 5 suggest that "SL outputs could be mis-calibrated". Also, it is less intuitive to learn input state parameters for an RL agent when data can be approximated from the environment (since it was used for SL ground truth)? 2. The effect of decisions on cluster sizes will depend on their relative scale w.r.t. the size of the total population. If clusters are as small as 8, 16 or 32 people, it will be very tough to observe a distinction between individual people and clusters.
To make claims for real policy decision making: clusters should at least be a census block [or county], and the simulation should analyze how these variables change with scale and mobility across clusters. What is the size of the total population considered? This was not evident from the experiments. I think these are sensitive problems with far-reaching implications. More research needs to be done before claims are put out into the world.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Ql4CuaB3-D
KDD.org/2023/Workshop/epiDAMIK
2023
Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization
["Xueqiao Peng", "Jiaqi Xu", "Xi Chen", "Dinh Song An Nguyen", "Andrew Perrault"]
Non-pharmaceutical interventions (NPIs) play a critical role in the defense against emerging pathogens. Among these interventions, familiar measures such as travel bans, event cancellations, social distancing, curfews, and lockdowns have become integral components of our response strategy. Contact tracing is especially widely adopted. However, the optimization of contact tracing involves navigating various trade-offs, including the simultaneous goals of minimizing virus transmission and reducing costs. Reinforcement learning (RL) techniques provide a promising avenue to model intricate decision-making processes and optimize policies to achieve specific objectives, but even modern deep RL techniques struggle in the high dimensional partially observable problem setting presented by contact tracing. We propose a novel RL approach to optimize a multi-objective infectious disease control policy that combines supervised learning with RL, allowing us to capitalize on the strengths of both techniques. Through extensive experimentation and evaluation, we show that our optimized policy surpasses the performance of five benchmark policies.
["reinforcement", "npi optimization", "interventions", "contact", "npis", "critical role", "defense", "pathogens", "familiar measures"]
ABSTRACT

Non-pharmaceutical interventions (NPIs) play a critical role in the defense against emerging pathogens. Among these interventions, familiar measures such as travel bans, event cancellations, social distancing, curfews, and lockdowns have become integral components of our response strategy. Contact tracing is especially widely adopted. However, the optimization of contact tracing involves navigating various trade-offs, including the simultaneous goals of minimizing virus transmission and reducing costs. Reinforcement learning (RL) techniques provide a promising avenue to model intricate decision-making processes and optimize policies to achieve specific objectives, but even modern deep RL techniques struggle in the high dimensional partially observable problem setting presented by contact tracing. We propose a novel RL approach to optimize a multi-objective infectious disease control policy that combines supervised learning with RL, allowing us to capitalize on the strengths of both techniques. Through extensive experimentation and evaluation, we show that our optimized policy surpasses the performance of five benchmark policies.

KEYWORDS

reinforcement learning, machine learning, contact tracing, public health

ACM Reference Format:
Xueqiao Peng, Jiaqi Xu, Xi Chen, Dinh Song An Nguyen, and Andrew Perrault. 2023. Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA, 7 pages.

1 INTRODUCTION

The COVID-19 pandemic has highlighted the crucial role of non-pharmaceutical interventions (NPIs) in effectively managing the spread of infectious diseases.
The implementation of NPIs requires careful consideration of multiple objectives, including the prevention of viral transmission and the reduction of costs associated with quarantine measures. Contact tracing has emerged as a widely adopted policy within the realm of NPIs and has been extensively studied in the context of COVID-19 [7, 8, 11, 21].

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. ©2023 Copyright held by the owner/author(s).

Nevertheless, optimizing NPIs remains a challenging open problem in many settings for several reasons. First, the objective is inherently multi-objective—intensified control efforts lead to higher costs. In addition, sensing actions, such as testing, may be included in all but the earliest stages of an infectious disease crisis. These have their own costs and constraints associated with them. Secondly, inferring the probability that an individual is infectious is difficult for infections that transmit substantially while asymptomatic, such as SARS-CoV-2. This inference problem is perhaps surprisingly high dimensional, as we show it is dependent on the symptom status and test results of all individuals in the same cluster due to transmission heterogeneity.

[Figure 1: Illustration of our approach. We combine an infection probability decoder that uses supervised learning with a reinforcement learning-based policy. Cluster symptom status and test information feed a CNN that outputs an infection probability; together with individual symptom status and test information, this forms the individual state, from which quarantine/test actions are selected via PPO learning against simulator rewards.]

In this work, our goal is to develop a generic approach for cluster-level optimization of NPIs.
To tackle this challenge, we propose a novel approach that integrates a convolutional neural network (CNN) and a reinforcement learning (RL) model [5, 20] (Fig. 1). The CNN is used to solve the high dimensional infection inference problem and uses a novel representation of the symptom and test state of the entire cluster as input, allowing a single CNN to be trained for all cluster sizes. The RL agent takes the CNN output and other features as its state, selects an action for each individual (including quarantine and testing), and aims to maximize a multi-objective reward function. This reward function includes a penalty for days where an individual is infectious but not isolated, a penalty for days where they are quarantined but not infectious, as well as a cost for any control action that is taken (e.g., test cost). As a case study, we have developed a branching process-based SARS-CoV-2 virus simulator, where we evaluate the effectiveness of our method. In this work, we focus on optimization only—in the longer term, we aim to use the results of optimization to automatically discover simple, implementable policies.

This paper makes the following contributions:

• We propose a novel RL approach for finding optimal contact tracing policies. Our approach combines a supervised learning model with an RL model, leveraging the strengths of both techniques to optimize the desired objectives. The resulting agent can be trained and deployed simultaneously across all cluster sizes.
• We show the existence of a theoretically simple, yet optimal, threshold-type policy for contact tracing in the setting where no sensing actions are available. Running this policy requires supervised learning only.
• We develop a simple branching process-based model for SARS-CoV-2 and compare our policies with baselines.
We show that we achieve better rewards across a range of objective parameters.

Related work. We identify two main thrusts of work that optimize contact tracing and NPIs: network and branching process models. Network models represent connections between individuals as edges in a possibly dynamic contact graph [4, 9, 12, 15, 16]. These approaches can leverage network structure in their decisions but make the strong assumption that the entire contact network is known. The closest existing approach to ours is RLGN [12], which formulates the problem as a sequential decision-making task within a temporal graph process. These approaches often consider a fixed budget of interventions rather than a multi-objective reward function. In contrast, branching processes take a cluster-based, tree-structured view of contagion [10, 13, 17]. These approaches have the advantage of aligning more closely with the information available to public health decision-makers in many practical settings (but allow for less expressive policies). All of these models are agent-based in the sense that they model individuals rather than subpopulations—because contact tracing decisions depend on the specific times at which certain events happen for individuals (e.g., exposure, symptoms), the additional detail that agent-based models provide is valuable for modeling and optimization.

2 BRANCHING PROCESS ENVIRONMENT

We take a branching process-based view of an infectious disease crisis (Fig. 2). We track two generations of potential individuals: the seed case and their contacts. We assume that interventions begin after a reporting and tracing delay. At that point, day t_start (t_start = 3 in Fig. 2), we observe the symptom history for each agent up to day t and must decide which action to take for each agent (e.g., quarantine, test). On day t, we observe the symptom state of each agent plus the results of any sensing actions (defined below) we have taken up to day t and must decide what action to take for each agent on day t.
The simulation proceeds for a fixed period of time until T.

[Figure 2: An agent-based branching process model. The diagram depicts standard contact tracing for an example seed case with six contacts, distinguishing days exposed, infectious without symptoms, infectious with symptoms, quarantined, and in isolation.]

In Fig. 2, we present an application of a contact tracing policy in the branching process framework. The seed case remains infectious for two days without exhibiting symptoms, followed by one day with symptoms, before entering isolation. In this example, all six contacts were exposed on the same day. Contacts 1 and 4 are infected and show symptoms on day 2 and day 3, respectively. All contacts are asked to quarantine if their infection probability is higher than a threshold. Contacts 3 and 5 serve quarantine on day 3. Contacts 2 and 6 start quarantining on day 4.

In an infectious disease crisis, we can use whatever data is available to construct such a branching process model. Many of the required components are distributions that are often estimated by epidemiologists in the early stages of an outbreak. We describe the distributions we used to simulate SARS-CoV-2 and their sources in Tab. 1. Components that are not known can be filled in conservatively, or sensitivity analysis can be performed. In some cases, distributional estimates can be shared across diseases—for example, POLYMOD [14] provides contact distributions for the US and Western European settings for both droplet and physical contact. The superspreading dynamics of infection can be impactful because most transmission is often driven by a small number of seed cases, and this concentration can be exploited by control policies [17].
Nevertheless, superspreading dynamics are often poorly understood, especially early in a crisis, and greater understanding would benefit approaches such as this paper's.

We define the objective function as

(−S1 − α2 × S2 − α3 × S3) / cluster_size    (1)

where

• S1 is the count of transmission days, i.e., days where an infected individual is not quarantined;
• S2 is the count of days where a quarantined individual is not infected, and α2 (which we assume is in [0, 1]) is the weight for this term;
• S3 is the sum of the action costs (e.g., test cost), and α3 is the weight for this term; and
• cluster_size normalizes the objective to a score per individual.

Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization (epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA)

Table 1: Parameters of the SARS-CoV-2 branching process model

Parameter | Assumed value | Details and references
Incubation time | Log-normal: log mean 1.57 days, log std 0.65 days | Mean: 5.94 days. Bi et al. [2]
Duration of infectious period | 7 days: 2 days before and 5 days after onset if symptomatic | Bi et al. [2]
Probability that an infected individual shows symptoms | 0.8 | Buitrago-Garcia et al. [3]
Probability of symptoms without infectiousness | 0.01 per day | Perrault et al. [17]
Probability of asymptomatic infection | 0.2 | Buitrago-Garcia et al. [3]
Probability of highly transmissive | 0.109 | Perrault et al. [17]
Infectiousness multiplier for highly transmissive individuals | 24.4 | Perrault et al. [17]
Test parameters | TP = 0.86, FP = 0.66, TN = 0.14, FN = 0.34 | Besutti et al. [1]
Delays | Observation delay = 3 days; test result delay = 1 day | Assumed

In summary, the objective function seeks to minimize the number of transmission days (i.e., days where an individual is infectious but not quarantined), minimize the number of days of non-effective quarantine, and minimize the cost associated with actions.

We consider two action types. Quarantine-type actions reduce the number of transmission days for an agent.
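Eq. 1 translates directly into code; a small sketch with illustrative counts (the names are mine, not the paper's):

```python
def objective(s1, s2, s3, cluster_size, alpha2, alpha3):
    """Eq. 1: per-individual penalty combining unisolated transmission
    days (s1), non-effective quarantine days (s2), and action costs (s3)."""
    return (-s1 - alpha2 * s2 - alpha3 * s3) / cluster_size

# e.g., 4 transmission days, 30 non-effective quarantine days, 10 unit-cost
# tests, in a cluster of 10 with alpha2 = alpha3 = 0.01:
r = objective(4, 30, 10, 10, 0.01, 0.01)  # ≈ -0.44
```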
The simplest quarantine-type action causes an agent to not produce a transmission day with probability 1 and incurs no additional cost. A more complex quarantine-type action may work probabilistically (because an individual may not choose to quarantine if directed), incur an additional cost (e.g., the cost of checking in with that individual by phone), or may be coupled with a sensing action (see below). A property of quarantine-type actions is that they contribute to S2 if the individual quarantines and is not infected.

Sensing-type actions do not affect the number of transmission days directly. Instead, they reveal information about an individual's infectious state according to a probability distribution. For example, an individual with known exposure to an infected person may show no symptoms; an antigen test can reveal whether they are infected. Actions can combine both sensing and quarantine, e.g., an action that performs an antigen test and then quarantines if the result is positive.

3 APPROACH

We show that the optimization problem from the previous section can be formulated as a partially observable Markov decision process (POMDP). However, solving this POMDP directly is wildly intractable.
Some hope arrives from the result that, under a simplified model that contains only sensing-type actions, the POMDP can be solved optimally if the probability that an individual is infectious can be estimated; this estimation is itself a challenging problem due to the high-dimensional observation space.

Motivated by this conclusion, we formulate our solution approach: we use a convolutional neural network (CNN) to estimate the probability of infectiousness for each individual in a cluster, and this output, along with cluster-wide statistics, serves as the state for the RL agent.

3.1 POMDP Formulation

We define a POMDP [6] as ⟨S, A, R, P, Ω, O, γ, S0⟩, where S and A represent the state and action spaces, respectively; R: S × A → ℝ is the reward function; P: S × A → ΔS is the transition function; Ω is the observation space; O: S × A → ΔΩ gives the observation probabilities; γ ∈ [0, 1] is the discount factor; and S0 ∈ ΔS is the distribution of initial states.

We briefly describe how to interpret the control problem of the previous section as a POMDP. We define the state space as containing all of the relevant information required to simulate the cluster, including whether the seed case is highly transmissive, whether each contact of the seed case will become infected, and whether they will show symptoms and, if so, on what day. This simulator data cannot be observed directly; instead we must rely on receiving action-dependent observations. We define the action space as the set of daily quarantine and sensing actions that are available for each individual in the cluster. For instance, in our experiments, we consider five actions: no quarantine and no test, quarantine and no test, test and no quarantine, test and quarantine, and test and quarantine only if positive. With N individuals in the cluster and a per-individual action set A, the joint action space has size |A|^N. For observations, we receive two types of information from each individual at each timestep: symptom information and test results.
We receive test results only when a sensing-type action is taken, and these results are noisy (Tab. 1). Similarly, we always observe symptoms if they are present, but both infectiousness without symptoms and symptoms without infectiousness are possible. The resulting observation space has size 4^N.

In principle, solving the POMDP formulation yields the optimal control policy. In practice, exact solving is not possible due to the high computational complexity of the best-known algorithms. A particular source of difficulty is the problem of calculating the posterior probability of infection for each individual given the observations. A key challenge is that the variation in infectiousness of the seed case causes the posterior probability of infection for each individual to depend on the observations for all other individuals. Intuitively, observing symptoms or positive test results for one individual makes it more likely that the seed case is highly transmissive and thus more likely that each other individual is infected.

3.2 Optimal Policy Without Sensing Actions

We first consider a simplified POMDP where the only actions available are a quarantine action and a no-quarantine action. We show that, if the posterior probability of infection can be calculated exactly, the optimal policy has a threshold-type form: if the posterior probability of infection is above a threshold, we quarantine, and otherwise we do not. We show this initially for a costless quarantine action with 100% efficiency, as this is what we use in experiments (Thm. 1). We then generalize the result to any menu of non-sensing actions, because the expected reward of each action can be exactly calculated given the posterior probability of infection (Thm. 2). We remark that these results provide additional context to the findings of Perrault et al.
[17] by defining the class of optimal risk-based policies.

Let p_inf represent the posterior probability of infection for an individual given the observations so far.

Theorem 1. With a costless quarantine action that is always successful, a null action, and the objective function of Eq. 1, the optimal policy is to quarantine if p_inf > α2 / (1 + α2) and take the null action otherwise.

Proof. Because we have access to the exact posterior probability of infection, we can calculate the expected objective value for each action exactly:

E[r] = −α2 · (1 − p_inf)   if quarantined,
E[r] = −p_inf              if not quarantined.    (2)

It follows that if p_inf > α2 / (1 + α2), the quarantine action has the higher expected reward. □

We can use the same proof technique to derive the optimal policy for any menu of non-sensing actions. A useful generalization is when the quarantine action has a cost and a failure rate.

Theorem 2. With a quarantine action with success rate 0 ≤ β ≤ 1 and cost 1, and a null action, the optimal policy is to quarantine if p_inf > (α2 · β + α3) / ((1 + α2) · β) and otherwise do not.

These results highlight the importance of the posterior probability of infection. We next dedicate our attention to producing useful estimates of p_inf.

3.3 Supervised Learning

We could use RL directly to solve the POMDP using the observation information as the state. Indeed, we show that this is somewhat effective if we leverage the state representation we develop in the next section. However, since we know the unobserved infectious state for each agent in simulation, we hypothesize that using a supervised learning model to predict p_inf and using this as input to the RL algorithm will lead to better objective values compared to pure RL (and in the experiments, we see that the improvement is often substantial).
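For reference, the threshold rules of Thms. 1 and 2 collapse to a one-line check (the function name is mine):

```python
def should_quarantine(p_inf, alpha2, alpha3=0.0, beta=1.0):
    """Thm. 2's rule: quarantine if p_inf exceeds
    (alpha2 * beta + alpha3) / ((1 + alpha2) * beta).
    With a costless, always-successful quarantine (alpha3 = 0, beta = 1)
    this reduces to Thm. 1's threshold alpha2 / (1 + alpha2)."""
    return p_inf > (alpha2 * beta + alpha3) / ((1 + alpha2) * beta)

# Thm. 1 with alpha2 = 0.01: quarantine whenever p_inf > 0.01/1.01 ≈ 0.0099
should_quarantine(0.02, alpha2=0.01)   # True
should_quarantine(0.005, alpha2=0.01)  # False
```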
Another option for estimating p_inf would be to use an algorithm for approximate probabilistic inference, such as Markov chain Monte Carlo, but doing so is challenging due to the high-dimensional discrete observation space, where most observations have zero probability for a given state of infectiousness.

A key question for applying supervised learning is how to represent the observation space. We have two desiderata. First, we would like the representation to not vary with cluster size. We can also achieve this property in the RL agent, resulting in an agent that can simultaneously be deployed across all cluster sizes, which makes both training and deployment simpler. Second, there is an advantage to using a representation that inherently accounts for the symmetries that arise from the ordering of individuals: if we permute the order of individuals in an observation, it should not affect p_inf for any individual.

After testing several representations that satisfy these properties, we arrive at the 7 × T matrix shown in Fig. 3, where T is the simulation length (in our experiments, T = 30). This is an egocentric representation of the observation: it is from the perspective of a particular contact and contains all information gathered so far. We train the supervised learning model f to produce output in [0, 1]^T, i.e., for every day of the simulation, the probability that the agent will be infectious given the observation, using simulation outputs where the true infectiousness of each individual is provided.

The representation contains the following information. The first row is 1 for each day after (inclusive) the day the individual shows symptoms. The second row is a binary indicator of whether the day is in the future (1 if yes). The third row is a count of the number of individuals in the cluster that have shown symptoms up to (inclusive) day t. The fourth row is the total number of contacts in the cluster minus 1 (constant across time). The fifth row is t.
The sixth row is 1 if a test was conducted for this individual, and the seventh row represents the results of that test (with a one-day delay). In row 2, 0s indicate that an observation was made by this day and 1s represent the future. In rows 6 and 7, 0s are used to represent the future (no test was ordered and no results were received).

We will show that this representation can achieve an AUC of 0.95 in predicting infectiousness for our branching process model if an appropriate architecture is selected.

[Figure 3: The observation representation used for supervised learning, shown on a cluster of size 10 after observing the outcome of day 2. Row labels: symptoms shown by day t?; 0 for past and present, 1 for future; total symptom count in cluster; cluster size minus 1; t; test on day t?; day t-1 test positive?]

[Figure 4: The supervised learning (CNN) output is used as input to the RL state, which prioritizes immediately relevant information. A 2D convolution and a linear layer map the 7 × 30 input matrix (observed state) to infection probabilities; the RL state comprises p_inf for the past three days, p_inf for the next three days, symptom indicators for the past three days, test indicators for the past three days, test results for the last three days, the cluster size, and the number of tests run across the cluster in the past three days.]

3.4 Reinforcement Learning

To make RL effective, we develop a compact state representation that includes the supervised learning outputs. As with supervised learning, we want the representation to have the same size for all clusters and to naturally encode permutation invariance. The representation we use is the 7 × 3 matrix shown in Fig. 4. As with the supervised learning representation, it is egocentric and time-specific.

The first and second rows represent the p_inf outputs from supervised learning for the last three days and next three days, respectively.
The third row indicates whether the individual exhibited symptoms on each of the past three days. The fourth row is an indicator for whether this individual was tested on each of the past three days. The fifth row denotes the test results, with a one-day delay. The sixth row is the cluster size. The last row indicates the number of tests conducted in the cluster over the past three days.

Training the RL algorithm is straightforward. First, we train the supervised learning predictor on data collected from the simulator. In our experiments, we use a fixed but stochastic control policy to collect this data. This has the advantage that a single supervised learning training run can serve as input to an arbitrary number of RL training runs. If the optimal policies are dramatically different from the data collection policy, an additional run of supervised learning training can be performed with the current RL policy to increase its accuracy.

Once the supervised learning predictor is trained, we train RL with Proximal Policy Optimization (PPO) [19]. In our experiments, we use six different policy initializations, train each for 800,000 environment interactions, and pick the best based on 100 evaluation runs. All training is performed on a single core of an Intel CPU (2.30 GHz) with 8 GB of RAM, and a single RL training run takes 20 minutes.

4 EXPERIMENTS

We compare different control policies in the branching process environment we construct for SARS-CoV-2. We consider a set of five control actions for each individual for each day: null action, quarantine, test but don't quarantine, quarantine but don't test, and test and quarantine only if results are positive. We assume that there is no failure rate for actions; all actions that include a test cost 1, and the others are costless. For α2, we use the small values 0.01 and 0.02, as typical SARS-CoV-2 contact tracing policies accept a large number of quarantine days for non-infectious individuals. For α3, we use values of 0.001, 0.005, 0.01, 0.02, 0.03, and 0.2.
We sample the cluster size from a uniform distribution on (2, 40). The model code is available online (https://github.com/XueqiaoPeng/CovidRL).

4.1 Supervised Learning Model

We experiment with a variety of supervised learning model architectures (Tab. 2) to find one that achieves a high AUC across cluster sizes. We find that CNNs are generally most effective and compare different kernels and layer structures. Among single-layer architectures, we find that larger 2D convolutions tend to achieve higher AUC. We then found that a single convolution layer followed by a linear layer performs as well as deeper architectures; this setup of a (5, 2) 2D convolution followed by a linear layer is what we use in the experiments below.

Table 2: We find that two-layer architectures using a 2D convolution followed by a linear layer achieve performance on par with larger models. (AUC by cluster size.)

Architecture | 4 | 8 | 16 | 32
1 layer: Conv1d (5,2) | 0.798 | 0.807 | 0.823 | 0.830
1 layer: Conv1d (5,3) | 0.814 | 0.830 | 0.835 | 0.839
1 layer: Conv2d (5,2) | 0.800 | 0.814 | 0.827 | 0.830
1 layer: Conv2d (5,3) | 0.832 | 0.820 | 0.838 | 0.840
1 layer: Conv2d (5,4) | 0.858 | 0.849 | 0.843 | 0.859
1 layer: Conv2d (5,5) | 0.864 | 0.895 | 0.893 | 0.893
2 layer: Conv1d (5,2) + Conv1d (1,2) | 0.824 | 0.830 | 0.833 | 0.840
2 layer: Conv2d (5,3) + Conv2d (1,3) | 0.883 | 0.903 | 0.898 | 0.897
2 layer: Conv2d (5,2) + linear layer | 0.955 | 0.960 | 0.947 | 0.961
2 layer: Conv2d (5,3) + linear layer | 0.951 | 0.960 | 0.940 | 0.964
3 layer: Conv1d (5,3) + Conv1d (1,3) + linear layer | 0.958 | 0.957 | 0.950 | 0.961
4 layer: Conv1d (4,3) + Conv1d (2,3) + Conv1d (1,3) + linear layer | 0.958 | 0.958 | 0.953 | 0.965

4.2 Benchmark Policies

We compare the RLSL approach we propose to several baselines.

• Threshold is the threshold-type policy suggested in Sec. 3.2. It does not use test actions.
This policy turns out to be highly conservative and results in long quarantine durations for all contacts for the tested α2 values.

• Symptom-Based Quarantine quarantines an individual if they exhibited symptoms on the day before the observed day, and otherwise does not.

• 14-Day Quarantine quarantines individuals from the initial day they exhibit symptoms until either 14 days have passed or until they no longer exhibit symptoms, whichever is later. No test action is included.

• No Quarantine always performs the null action.

Table 3: RLSL achieves higher objective values (higher is better) than baselines across all tested α2 and α3. Columns give α3 = 0.001 / 0.005 / 0.01 / 0.02 / 0.03 / 0.2; the baselines take no test actions, so their values do not vary with α3.

α2 = 0.01:
  RLSL (Ours): −3.77±0.25 / −10.27±0.15 / −17.13±0.48 / −44.22±0.84 / −46.46±1.47 / −110.92±1.54
  Threshold: −21.79±0.20 (all α3)
  Symptom-Based Quarantine: −111.13±14.18 (all α3)
  14 Days Quarantine: −97.18±9.97 (all α3)
  No Quarantine: −235.98±18.53 (all α3)
α2 = 0.02:
  RLSL (Ours): −4.01±0.21 / −17.64±0.32 / −25.39±0.48 / −49.28±0.66 / −64.45±0.83 / −120.21±0.22
  Threshold: −43.65±0.32 (all α3)
  Symptom-Based Quarantine: −112.60±11.94 (all α3)
  14 Days Quarantine: −106.63±11.00 (all α3)
  No Quarantine: −242.16±20.38 (all α3)

Table 4: S1, S2, and S3 per individual compared across different cluster sizes (lower is better), using α2 = 0.01 and α3 = 0.01. Even relatively conservative strategies such as 14-day quarantine from exposure fail to isolate some infections in our simulation. RLSL can benefit substantially from the additional information available in large clusters, resulting in strong performance with low test costs. (S3 is reported only for RLSL; the other policies take no test actions.)

Cluster size = 4:
  RLSL: S1 0.064±0.008, S2 6.808±0.184, S3 10.144±0.052
  Threshold: S1 0.078±0.013, S2 16.012±0.211
  Symptom-Based Quarantine: S1 1.418±0.199, S2 0.236±0.029
  14-day Quarantine: S1 1.042±0.072, S2 2.469±0.113
  No Quarantine: S1 2.361±0.195
Cluster size = 8:
  RLSL: S1 0.077±0.012, S2 7.552±0.099, S3 11.825±0.056
  Threshold: S1 0.063±0.013, S2 17.656±0.198
  Symptom-Based Quarantine: S1 1.207±0.187, S2 0.239±0.014
  14-day Quarantine: S1 0.965±0.082, S2 2.440±0.144
  No Quarantine: S1 2.597±0.282
Cluster size = 16:
  RLSL: S1 0.075±0.011, S2 10.033±0.127, S3 11.253±0.087
  Threshold: S1 0.05±0.008, S2 19.681±0.173
  Symptom-Based Quarantine: S1 1.196±0.052, S2 0.232±0.017
  14-day Quarantine: S1 0.973±0.114, S2 2.291±0.125
  No Quarantine: S1 2.075±0.203
Cluster size = 32:
  RLSL: S1 0.054±0.007, S2 8.259±0.090, S3 10.808±0.134
  Threshold: S1 0.016±0.003, S2 20.701±0.319
  Symptom-Based Quarantine: S1 1.072±0.146, S2 0.261±0.042
  14-day Quarantine: S1 0.929±0.107, S2 2.004±0.155
  No Quarantine: S1 1.856±0.173

Table 5: In cases where test costs are higher, RLSL produces policies that test too often, resulting in lower performance than RLSL models with only quarantine actions; we discuss potential fixes. Columns give α3 = 0.001 / 0.005 / 0.01 / 0.02 / 0.03 / 0.2.

α2 = 0.01:
  RLSL: −3.77±0.25 / −10.27±0.15 / −17.13±0.48 / −44.22±0.84 / −46.46±1.47 / −110.92±1.54
  RLSL (Daily Test): −4.30±0.42 / −13.15±0.15 / −24.46±0.17 / −45.62±1.27 / −74.68±0.2 / −737.78±3.33
  RLSL (No Test): −34.56±0.39 (all α3)
  RL Only: −14.64±0.79 / −20.32±0.83 / −34.02±0.70 / −46.10±1.14 / −53.22±1.01 / −84.35±1.04
  Threshold (SL Only): −21.79±0.20 (all α3)
α2 = 0.02:
  RLSL: −4.01±0.21 / −17.64±0.32 / −25.39±0.48 / −49.28±0.66 / −64.45±0.83 / −120.21±0.22
  RLSL (Daily Test): −12.81±0.55 / −23.72±0.47 / −27.25±0.58 / −50.50±0.11 / −75.88±0.26 / −739.98±1.516
  RLSL (No Test): −52.92±0.13 (all α3)
  RL Only: −15.36±0.76 / −25.66±0.56 / −39.80±0.39 / −63.07±0.81 / −70.56±0.827 / −162.4±2.36
  Threshold (SL Only): −43.65±0.32 (all α3)

4.3 Analysis

Our experimental results report the average objective value and standard error taken over 10 random clusters (Tab. 3). We find that RLSL and Threshold achieve better performance than the baselines in all cases. However, our current methods for RLSL struggle relative to Threshold when tests are expensive. Our experimental results could be broadened by including more α values and more analysis of where the RLSL policies gain their advantage (but see the discussion of Tab. 5 below for some insights).

Focusing on the setting of α2 = 0.01 and α3 = 0.01, we report objective values broken out by component and by cluster size, measured per individual (Tab. 4). Here we can get an intuitive grasp of what is happening in the different policies. Threshold aggressively quarantines, resulting in S2 = 16–20, i.e., 16–20 days of quarantine without infection per contact, for the tested α values. This is able to drive S1 to a low value, resulting in an average objective value of −21.79. Recall that S1 is weighted much more heavily (100 times higher) than S2 in this setting. Symptom-based and 14-day quarantine reduce S2 by a factor of 8 to 100, but this causes S1 to be roughly 150 to 200 times higher. By leveraging tests, RLSL can reduce S2 by a factor of 2–3 and S1 by a factor of 0.8–3.5.

In the ablation study (Tab. 5), we gain a more detailed view into the operation of the RLSL policy.
We see that the introduction of the SL outputs to the RL state results in better performance in all tested scenarios compared to RL Only, which uses the state representation of Fig. 4 without the first two rows.

We can observe limitations of the supervised infectiousness prediction model in Tab. 4, where the S2 cost does not decrease as cluster size increases; from Thm. 1, we can conclude that if p_inf were correct, the ratio of S1 to S2 for Threshold would not depend on cluster size. There are several possible causes of this issue. First, the SL model outputs might be miscalibrated, as is often the case for neural networks trained on highly imbalanced data. This issue could be fixed with post-hoc calibration such as Platt scaling [18]. If necessary, a more sophisticated calibration could be employed with separate calibration parameters per cluster size. Second, it may be the case that the SL model outputs are wrong for reasons other than calibration. For example, the model may receive insufficient relevant training data, as it is trained on data produced by a random policy rather than by Threshold or RLSL. It is also possible that we performed insufficient architecture search.

We also see that RLSL (No Test) often performs better than RLSL as test costs increase. This suggests that RLSL is not finding a truly optimal policy. This could likely be addressed by using a wider range of initialization values for RLSL, for example, initializing some seeds to policies that test very little (the initialization we use for RLSL and RL Only tests heavily). This observation has a silver lining: RL (No Test) can achieve much stronger performance than baselines even without tests. This implies that RL (No Test) is able to correct for the errors in Threshold to find a policy closer to what is suggested by Thm.
1.

5 DISCUSSION AND FUTURE WORK

This work aims to develop a generic multi-objective optimization approach for cluster-level optimization of NPIs. We formulate this problem for RL in a branching process environment. We present initial results that demonstrate the potential of our approach: in a branching process model of SARS-CoV-2, we can achieve substantially higher objective values than baseline policies. The resulting policies can be applied across all cluster sizes and do not take much time to train on consumer hardware. The policies we propose are able to heavily exploit superspreading dynamics.

Our vision for an infectious disease crisis is that a canonical probabilistic model of the disease is constructed and updated throughout the crisis. The model can be constructed from estimates of key disease parameters that are made from various sources throughout a crisis and can reflect uncertainty in these estimates. We advocate that superspreading dynamics be given substantial attention in these early stages, due to the substantial influence we find they can have on interventions. Using this canonical model, a branching process environment can be constructed and optimized against, as we propose in this paper. We do not consider uncertainty in the parameters of this model, but it is possible to do so with existing techniques; this leads to different RL algorithmic choices depending on the form of the uncertainty and the desired objective.

A key disadvantage of our approach as presented is the complexity of the resulting policies. For instance, executing our RLSL policy requires training and drawing outputs from two neural networks. In contrast, policies that were employed in the SARS-CoV-2 pandemic consisted of short lists of rules. We believe that this is not an inherent weakness of our approach: we can leverage interpretable ML and RL techniques to "distill" the RLSL policies into, say, low-depth decision trees, allowing them to be applied at scale with low logistical cost.
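As an illustration of such distillation: collect state–action pairs from policy rollouts and fit a small rule-based model to them. The sketch below uses a synthetic threshold "agent" as a stand-in for RLSL rollouts and a one-feature decision stump as the simplest distilled policy; in practice a low-depth decision tree learner (e.g., scikit-learn's DecisionTreeClassifier) would take its place.

```python
import random

def fit_stump(states, actions, feature=0):
    """Distill a policy into a one-rule 'decision stump': search for the
    single-feature threshold that best reproduces the agent's actions.
    (A stand-in for fitting a low-depth decision tree to RLSL rollouts.)"""
    vals = sorted({s[feature] for s in states})
    best_acc, best_thr = 0.0, None
    for lo, hi in zip(vals, vals[1:]):
        thr = (lo + hi) / 2
        acc = sum((s[feature] > thr) == bool(a)
                  for s, a in zip(states, actions)) / len(states)
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_acc, best_thr

# Synthetic stand-in for (state, action) pairs from RLSL rollouts:
# the 'agent' quarantines exactly when its p_inf estimate exceeds 0.2.
rng = random.Random(0)
states = [(rng.random(), rng.random()) for _ in range(500)]
actions = [int(s[0] > 0.2) for s in states]
acc, thr = fit_stump(states, actions)  # recovers a threshold near 0.2
```

Here the stump reproduces the agent exactly only because the synthetic policy is itself a single-feature rule; a real RLSL policy would distill approximately, with the quality loss noted below.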
There will be some decrease in quality, but we suspect a still-substantial advantage over baselines.

An area for future study is the cost and benefit of taking a cluster-level rather than individual-level view of policy application. The cluster-level view imposes additional logistical costs, and its benefit depends on the degree of cluster-level transmission heterogeneity that is present. This trade-off is not well understood and is a critical area for future work.
9xxjrp7gXOo
Review
3: Marginally above acceptance threshold
### Summary

This work proposes to use reinforcement learning for optimizing a multi-objective infectious disease control policy in a branching process environment. In the approach, the paper uses a convolutional neural network to estimate the probability of infectiousness for each individual in a cluster and uses the outputs as the state of the RL agent. This work evaluates the proposed approach in a branching process simulated for SARS-CoV-2 and compares the approach with baseline policies. The baselines include thresholding, Symptom-Based Quarantine, 14-Day Quarantine, and no quarantine. The results show that the proposed approach achieves higher objective values than the baselines across multiple parameter settings.

### Weaknesses

- The environment setup needs to be further explained. For example, it would be better to provide formal definitions of the branching process environment, including the states and necessary parameters. Moreover, the example illustrated in Figure 2 is confusing: it would be better to explain what factors cause the state changes in different clusters.
- Further discussion of and comparison with related work needs to be incorporated. It would be better to provide a more detailed discussion of related work, especially previous decision-making or RL methods for optimizing intervention policy, such as RLGN [1], and to compare against such methods in the experiments. Moreover, the motivation for using branching processes and a cluster-based view needs to be further elaborated.
- More details of the experiments need to be included, for example, the detailed setup of the branching processes for SARS-CoV-2, its hyper-parameter settings, and how training examples are generated. Including such details would help readers better interpret the results of the comparison.

[1] Eli Meirom, Haggai Maron, Shie Mannor, and Gal Chechik. 2021.
Controlling graph dynamics with reinforcement learning and graph neural networks. In International Conference on Machine Learning. PMLR, 7565–7577
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Ql4CuaB3-D
KDD.org/2023/Workshop/epiDAMIK
2023
Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization
["Xueqiao Peng", "Jiaqi Xu", "Xi Chen", "Dinh Song An Nguyen", "Andrew Perrault"]
Non-pharmaceutical interventions (NPIs) play a critical role in the defense against emerging pathogens. Among these interventions, familiar measures such as travel bans, event cancellations, social distancing, curfews, and lockdowns have become integral components of our response strategy. Contact tracing is especially widely adopted. However, the optimization of contact tracing involves navigating various trade-offs, including the simultaneous goals of minimizing virus transmission and reducing costs. Reinforcement learning (RL) techniques provide a promising avenue to model intricate decision-making processes and optimize policies to achieve specific objectives, but even modern deep RL techniques struggle in the high-dimensional, partially observable problem setting presented by contact tracing. We propose a novel RL approach to optimize a multi-objective infectious disease control policy that combines supervised learning with RL, allowing us to capitalize on the strengths of both techniques. Through extensive experimentation and evaluation, we show that our optimized policy surpasses the performance of five benchmark policies.
["reinforcement", "npi optimization", "interventions", "contact", "npis", "critical role", "defense", "pathogens", "familiar measures"]
ABSTRACT

Non-pharmaceutical interventions (NPIs) play a critical role in the defense against emerging pathogens. Among these interventions, familiar measures such as travel bans, event cancellations, social distancing, curfews, and lockdowns have become integral components of our response strategy. Contact tracing is especially widely adopted. However, the optimization of contact tracing involves navigating various trade-offs, including the simultaneous goals of minimizing virus transmission and reducing costs. Reinforcement learning (RL) techniques provide a promising avenue to model intricate decision-making processes and optimize policies to achieve specific objectives, but even modern deep RL techniques struggle in the high-dimensional, partially observable problem setting presented by contact tracing. We propose a novel RL approach to optimize a multi-objective infectious disease control policy that combines supervised learning with RL, allowing us to capitalize on the strengths of both techniques. Through extensive experimentation and evaluation, we show that our optimized policy surpasses the performance of five benchmark policies.

KEYWORDS

reinforcement learning, machine learning, contact tracing, public health

ACM Reference Format:
Xueqiao Peng, Jiaqi Xu, Xi Chen, Dinh Song An Nguyen, and Andrew Perrault. 2023. Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA, 7 pages.

1 INTRODUCTION

The COVID-19 pandemic has highlighted the crucial role of non-pharmaceutical interventions (NPIs) in effectively managing the spread of infectious diseases.
The implementation of NPIs requires careful consideration of multiple objectives, including the prevention of viral transmission and the reduction of costs associated with quarantine measures. Contact tracing has emerged as a widely adopted policy within the realm of NPIs and has been extensively studied in the context of COVID-19 [7, 8, 11, 21].

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. © 2023 Copyright held by the owner/author(s).

Nevertheless, optimizing NPIs remains a challenging open problem in many settings for several reasons. First, the objective is inherently multi-objective: intensified control efforts lead to higher costs. In addition, sensing actions, such as testing, may be included in all but the earliest stages of an infectious disease crisis; these have their own costs and constraints associated with them. Second, inferring the probability that an individual is infectious is difficult for infections that do substantial transmission asymptomatically, such as SARS-CoV-2. This inference problem is perhaps surprisingly high-dimensional, as we show it depends on the symptom status and test results of all individuals in the same cluster due to transmission heterogeneity.

[Figure 1: Illustration of our approach. We combine an infection probability decoder that uses supervised learning with a reinforcement learning-based policy. A CNN maps cluster symptom status and test information to an infection probability; together with individual symptom status and test information, this forms the individual state, from which PPO learning selects quarantine and test actions in the simulator and receives a reward.]

In this work, our goal is to develop a generic approach for cluster-level optimization of NPIs.
Xueqiao Peng, Jiaqi Xu, Xi Chen, Dinh Song An Nguyen, and Andrew Perrault

To tackle this challenge, we propose a novel approach that integrates a convolutional neural network (CNN) and a reinforcement learning (RL) model [5, 20] (Fig. 1). The CNN is used to solve the high dimensional infection inference problem and uses a novel representation of the symptom and test state of the entire cluster as input, allowing a single CNN to be trained for all cluster sizes. The RL agent takes the CNN output and other features as its state, selects an action for each individual (including quarantine and testing), and aims to maximize a multi-objective reward function. This reward function includes a penalty for days where an individual is infectious but not isolated, a penalty for days where they are quarantined but not infectious, as well as a cost for any control action that is taken (e.g., test cost). As a case study, we have developed a branching process-based SARS-CoV-2 virus simulator, in which we evaluate the effectiveness of our method. In this work, we focus on optimization only—in the longer term, we aim to use the results of optimization to automatically discover simple, implementable policies.

This paper makes the following contributions:
•We propose a novel RL approach for finding optimal contact tracing policies. Our approach combines a supervised learning model with an RL model, leveraging the strengths of both techniques to optimize the desired objectives. The resulting agent can be trained and deployed simultaneously across all cluster sizes.
•We show the existence of a theoretically simple, yet optimal, threshold-type policy for contact tracing in the setting where no sensing actions are available. Running this policy requires supervised learning only.
•We develop a simple branching process-based model for SARS-CoV-2 and compare our policies with baselines.
We show that we achieve better rewards across a range of objective parameters.

Related work. We identify two main thrusts of work that optimize contact tracing and NPIs: network and branching process models. Network models represent connections between individuals as edges in a possibly dynamic contact graph [4, 9, 12, 15, 16]. These approaches can leverage network structure in their decisions but make the strong assumption that the entire contact network is known. The closest existing approach to ours is RLGN [12], which formulates the problem as a sequential decision-making task within a temporal graph process. These approaches often consider a fixed budget of interventions rather than a multi-objective reward function. In contrast, branching process models take a cluster-based, tree-structured view of contagion [10, 13, 17]. These approaches have the advantage of aligning more closely with the information available to public health decision-makers in many practical settings (but allow for less expressive policies). All of these models are agent-based in the sense that they model individuals rather than subpopulations—because contact tracing decisions depend on the specific time that certain events happen for individuals (e.g., exposure, symptoms), the additional detail that agent-based models provide is valuable for modeling and optimization.

2 BRANCHING PROCESS ENVIRONMENT

We take a branching process-based view of an infectious disease crisis (Fig. 2). We track two generations of individuals: the seed case and their contacts. We assume that interventions begin after a reporting and tracing delay. At that point, day t_start (t_start = 3 in Fig. 2), we observe the symptom history for each agent up to that day and must decide which action to take for each agent (e.g., quarantine, test). On each day t thereafter, we observe the symptom state of each agent plus the results of any sensing actions (defined below) we have taken up to day t, and must decide what action to take for each agent on day t.
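The daily observe-then-decide cycle described above can be sketched as follows. This is a minimal sketch with a hypothetical simulator interface (the `observe`/`step` methods and all names are ours, not from the released code):

```python
def run_episode(sim, policy, t_start, horizon):
    """Roll out one cluster from day t_start to day horizon - 1.

    `sim` is any object exposing observe(t) -> per-agent observations
    (symptom states plus delayed test results) and step(t, actions);
    `policy` maps the day's observations to one action per agent.
    """
    history = []
    for t in range(t_start, horizon):
        obs = sim.observe(t)        # symptoms + any delayed test results
        actions = policy(obs, t)    # e.g., quarantine/test per agent
        sim.step(t, actions)        # advance the branching process one day
        history.append((t, obs, actions))
    return history
```

The loop starts at day t_start because, as described above, interventions only begin after the reporting and tracing delay.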
The simulation proceeds for a fixed period of time, until day T.

Figure 2: An agent-based branching process model. The diagram depicts standard contact tracing for an example seed case with six contacts.

In Fig. 2, we present an application of a contact tracing policy in the branching process framework. The seed case remains infectious for two days without exhibiting symptoms, followed by one day with symptoms, before entering isolation. In this example, all six contacts were exposed on the same day. Contacts 1 and 4 are infected and show symptoms on day 2 and day 3, respectively. All contacts are asked to quarantine if their infection probability is higher than a threshold. Contacts 3 and 5 begin quarantine on day 3; contacts 2 and 6 start quarantining on day 4.

In an infectious disease crisis, we can use whatever data is available to construct such a branching process model. Many of the required components are distributions that are often estimated by epidemiologists in the early stages of an outbreak. We describe the distributions we used to simulate SARS-CoV-2 and their sources in Tab. 1. Components that are not known can be filled in conservatively, or sensitivity analysis can be performed. In some cases, distributional estimates can be shared across diseases—for example, POLYMOD [14] provides contact distributions for the US and Western European settings for both droplet and physical contact. The superspreading dynamics of infection can be impactful because most transmission is often driven by a small number of seed cases, and this concentration can be exploited by control policies [17].
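As a sanity check on the incubation-time entry of Tab. 1, a log-normal with log mean 1.57 and log std 0.65 has analytic mean exp(1.57 + 0.65²/2) ≈ 5.94 days, matching the quoted value; a stdlib sketch of sampling it (constant and function names are ours):

```python
import math
import random

LOG_MEAN, LOG_STD = 1.57, 0.65  # Tab. 1, incubation time (Bi et al. [2])

def sample_incubation_days(rng: random.Random) -> float:
    """Draw one incubation period (days) from the Tab. 1 log-normal."""
    return rng.lognormvariate(LOG_MEAN, LOG_STD)

# Analytic mean of a log-normal: exp(mu + sigma^2 / 2) ≈ 5.94 days.
analytic_mean = math.exp(LOG_MEAN + LOG_STD ** 2 / 2)
```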
Nevertheless, superspreading dynamics are often poorly understood, especially early in a crisis, and greater understanding would benefit approaches such as this paper's.

We define the objective function as

(−S1 − α2 × S2 − α3 × S3) / cluster_size    (1)

where

•S1 is the count of transmission days where an infected individual is not quarantined,
•S2 is the count of days where a quarantined individual is not infected, and α2 (which we assume is in [0, 1]) is the weight for this term,
•S3 is the sum of the action costs (e.g., test cost) and α3 is the weight for this term, and
•cluster_size normalizes the objectives to a score per individual.

In summary, the objective function seeks to minimize the number of transmission days (i.e., days where an individual is infectious but not quarantined), minimize the number of days of non-effective quarantine, and minimize the cost associated with actions.

Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization, epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA

Table 1: Parameters of the SARS-CoV-2 branching process model

| Parameter | Assumed value | Details and references |
| Incubation time | Log-normal: log mean 1.57 days, log std 0.65 days | Mean: 5.94 days. Bi et al. [2] |
| Duration of infectious period | 7 days—2 days before and 5 days after onset if symptomatic | Bi et al. [2] |
| Probability that an infected individual shows symptoms | 0.8 | Buitrago-Garcia et al. [3] |
| Probability of symptoms without infectiousness | 0.01 per day | Perrault et al. [17] |
| Probability of asymptomatic infection | 0.2 | Buitrago-Garcia et al. [3] |
| Probability of being highly transmissive | 0.109 | Perrault et al. [17] |
| Infectiousness multiplier for highly transmissive individuals | 24.4 | Perrault et al. [17] |
| Test parameters | TP = 0.86, FP = 0.66, TN = 0.14, FN = 0.34 | Besutti et al. [1] |
| Delays | Observation delay = 3 days; test result delay = 1 day | Assumed |

We consider two action types. Quarantine-type actions reduce the number of transmission days for an agent.
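Eq. 1 is cheap to evaluate from episode tallies; a minimal sketch (function and argument names are ours):

```python
def objective(s1: float, s2: float, s3: float,
              alpha2: float, alpha3: float, cluster_size: int) -> float:
    """Per-individual multi-objective reward of Eq. 1 (higher is better).

    s1: transmission days (infectious but not quarantined)
    s2: wasted quarantine days (quarantined but not infected)
    s3: total action cost (e.g., number of tests)
    """
    return (-s1 - alpha2 * s2 - alpha3 * s3) / cluster_size
```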
The simplest quarantine-type action causes an agent to not produce a transmission day with probability 1 and incurs no additional cost. A more complex quarantine-type action may work probabilistically (because an individual may not choose to quarantine if directed), incur an additional cost (e.g., the cost of checking in with that individual by phone), or may be coupled with a sensing action (see below). A downside of quarantine-type actions is that they contribute to S2 if the individual quarantines and is not infected.

Sensing-type actions do not directly affect the number of transmission days. Instead, they reveal information about an individual's infectious state according to a probability distribution. For example, someone may have had known exposure to an infected person but show no symptoms; an antigen test can reveal whether this person is infected. Actions can combine both sensing and quarantine, e.g., an action that performs an antigen test and then quarantines if the result is positive.

3 APPROACH

We show that the optimization problem from the previous section can be formulated as a partially observable Markov decision process (POMDP). However, solving this POMDP directly is wildly intractable.
Some hope arrives from the result that, under a simplified model that contains only non-sensing-type actions, the POMDP can be solved optimally if the probability that an individual is infectious can be estimated—itself a challenging problem due to the high dimensional observation space.

Motivated by this conclusion, we formulate our solution approach: we use a convolutional neural network (CNN) to estimate the probability of infectiousness for each individual in a cluster, and this output, along with cluster-wide statistics, serves as the state for the RL agent.

3.1 POMDP Formulation

We define a POMDP [6] as ⟨S, A, R, P, Ω, O, γ, S0⟩, where S and A represent the state and action spaces, respectively, R: S × A → ℝ is the reward function, P: S × A → ΔS is the transition function, Ω is the observation space, O: S × A → ΔΩ gives the observation probabilities, γ ∈ [0, 1] is the discount factor, and S0 ∈ ΔS is the distribution of initial states.

We briefly describe how to interpret the control problem of the previous section as a POMDP. We define the state space as containing all of the relevant information required to simulate the cluster, including whether the seed case is highly transmissive, whether each contact of a seed case will become infected, and whether they will show symptoms and, if so, on what day. This simulator data cannot be observed directly—instead we must rely on receiving action-dependent observations. We define the action space as the set of daily quarantine and sensing actions that are available for each individual in the cluster. For instance, in our experiments, we consider five actions: no quarantine and no test, quarantine and no test, test and no quarantine, test and quarantine, and test and quarantine only if positive. If we have N individuals in the cluster, we have an action space of size |A|^N. For observations, we receive two types of information from each individual in each timestep: symptom information and test results.
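As a quick check on the |A|^N joint action space, the five per-individual actions can be enumerated directly (the labels are ours):

```python
from itertools import product

# The five per-individual actions from the text (labels are ours).
ACTIONS = [
    "none",                     # no quarantine, no test
    "quarantine",               # quarantine, no test
    "test",                     # test, no quarantine
    "test_and_quarantine",      # test and quarantine
    "test_quarantine_if_pos",   # test; quarantine only if positive
]

def joint_action_space_size(n_individuals: int) -> int:
    """|A|^N joint actions for a cluster of N individuals."""
    return len(ACTIONS) ** n_individuals
```

For even a modest cluster, the joint space grows quickly, which is one reason direct POMDP solving is impractical here.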
We receive test results only when a sensing-type action is taken, and these results are noisy (Tab. 1). Similarly, we always observe symptoms if they are present, but both infectiousness without symptoms and symptoms without infectiousness are possible. The resulting observation space size is 4^N.

In principle, solving the POMDP formulation results in the optimal control policy. In practice, exact solving is not possible due to the high computational complexity of the best-known algorithms. A particular source of difficulty is the problem of calculating the posterior probability of infection for each individual given the observations. A key challenge is that the variation in infectiousness of the seed case causes the posterior probability of infection for each individual to depend on the observations for all other individuals. Intuitively, observing symptoms or positive test results for one individual makes it more likely that the seed case is highly transmissive and thus more likely that each other individual is infected.

3.2 Optimal Policy Without Sensing Actions

We first consider a simplified POMDP where the only actions available are a quarantine action and a no-quarantine action. We show that, if the posterior probability of infection can be calculated exactly, the optimal policy has a threshold-type form: if the posterior probability of infection is above a threshold, we quarantine, and otherwise we do not. We show this initially for a costless quarantine action with 100% efficiency, as this is what we use in experiments (Thm. 1). We then generalize the result to any menu of non-sensing actions because the expected reward of each action can be exactly calculated given the posterior probability of infection (Thm. 2). We remark that these results provide additional context to the findings of Perrault et al.
[17] by defining the class of optimal risk-based policies.

Let p_inf represent the posterior probability of infection for an individual given the observations so far.

Theorem 1. With a costless quarantine action that is always successful, a null action, and the objective function of Eq. 1, the optimal policy is to quarantine if p_inf > α2 / (1 + α2) and take the null action otherwise.

Proof. Because we have access to the exact posterior probability of infection, we can calculate the expected objective value for each action exactly:

E[r] = −α2 · (1 − p_inf)  if quarantined,
E[r] = −p_inf             if not quarantined.    (2)

We can then show that if p_inf > α2 / (1 + α2), the quarantine action has higher expected reward. □

We can use the above proof technique to derive the optimal policy for any menu of non-sensing actions. A useful generalization is when the quarantine action has a cost and a failure rate.

Theorem 2. With a quarantine action with success rate 0 ≤ β ≤ 1 and cost 1, and a null action, the optimal policy is to quarantine if p_inf > (α2 · β + α3) / ((1 + α2) · β) and otherwise do not.

These results highlight the importance of the posterior probability of infection. We next dedicate our attention to producing useful estimates of p_inf.

3.3 Supervised Learning

We could use RL directly to solve the POMDP using the observation information as the state. Indeed, we show that this is somewhat effective if we leverage the state representation we develop in the next section. However, as we know the unobserved infectious state for each agent in simulation, we hypothesize that using a supervised learning model to predict p_inf and using this as input to the RL algorithm will lead to better objective values compared to pure RL (and in the experiments, we see that the improvement is often substantial).
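For reference, the threshold rules of Thms. 1 and 2 reduce to one-line checks once p_inf is available; a sketch (note that with β = 1 and zero test-cost weight, the Thm. 2 threshold recovers Thm. 1's):

```python
def thm1_threshold(alpha2: float) -> float:
    """Thm. 1: quarantine iff p_inf > alpha2 / (1 + alpha2)."""
    return alpha2 / (1 + alpha2)

def thm2_threshold(alpha2: float, alpha3: float, beta: float) -> float:
    """Thm. 2: for a quarantine action with success rate beta and cost 1,
    quarantine iff p_inf > (alpha2*beta + alpha3) / ((1 + alpha2)*beta)."""
    return (alpha2 * beta + alpha3) / ((1 + alpha2) * beta)

def should_quarantine(p_inf: float, alpha2: float) -> bool:
    """Thm. 1 decision rule."""
    return p_inf > thm1_threshold(alpha2)
```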
Another option for estimating p_inf would be to use an algorithm for approximate probabilistic inference, such as Markov chain Monte Carlo, but doing so is challenging due to the high dimensional discrete observation space, where most observations have zero probability for a given state of infectiousness.

A key question for applying supervised learning is how to represent the observation space. We have two desiderata. First, we would like the representation to not vary with cluster size. We can also achieve this property in the RL agent, resulting in an agent that can simultaneously be deployed across all cluster sizes, which makes both training and deployment simpler. Second, there is an advantage to using a representation that inherently accounts for the symmetries that arise due to the ordering of individuals, i.e., if we permute the order of individuals in an observation, it should not affect p_inf for each individual.

After testing several representations that satisfy these properties, we arrive at the 7×T matrix shown in Fig. 3, where T is the simulation length (in our experiments, T = 30). This is an egocentric representation of the observation—it is from the perspective of a particular contact and contains all information gathered so far. We train the supervised learning model f to produce output of dimension [0, 1]^T, i.e., for every day of the simulation, the probability that the agent will be infectious given the observation, using simulation outputs where the infectiousness of each individual is provided.

The representation contains the following information. The first row is 1 for each day on or after the day the individual shows symptoms. The second row is a binary indicator of whether this day is in the future (1 if yes). The third row is a count of the number of individuals in the cluster that have shown symptoms up to (and including) day t. The fourth row is the total number of contacts in the cluster minus 1 (constant across time). The fifth row is t.
The sixth row is 1 if a test was conducted for this individual, and the seventh row represents the results of that test (with a one-day delay). In row 2, 0s are used to indicate that the observation was made by this day and 1s represent the future. In rows 6 and 7, 0s are used to represent the future (no test was ordered and no results were received).

We will show that this representation can achieve an AUC of 0.95 in predicting infectiousness for our branching process model if an appropriate architecture is selected.

Figure 3: The observation representation used for supervised learning, shown on a cluster of size 10 after observing the outcome of day 2. (Rows: symptoms shown by day t; 0 for past and present, 1 for future; total symptom count in cluster; cluster size − 1; t; test on day t; day t−1 test positive.)

Figure 4: The supervised learning (CNN) output is used as input to the RL state, which prioritizes immediately relevant information.

3.4 Reinforcement Learning

To make RL effective, we develop a compact state representation that includes supervised learning outputs. As with supervised learning, we want the representation to have the same size for all clusters and to naturally encode permutation invariance. The representation we use is a 7×3 matrix shown in Fig. 4. As with the supervised learning representation, it is egocentric and time-specific.

The first and second rows represent the p_inf outputs from supervised learning for the last three days and next three days, respectively.
The third row indicates whether the individual exhibited symptoms on each of the past three days. The fourth row is an indicator for whether this individual was tested on each of the past three days. The fifth row denotes the test results with a one-day delay. The sixth row is the cluster size. The last row indicates the number of tests conducted in the cluster in the past three days.

Training the RL algorithm is straightforward. First, we train the supervised learning predictor from data collected from the simulator. In our experiments, we use a fixed but stochastic control policy to collect this data. This has the advantage that a single supervised learning training run can serve as input to an arbitrary number of RL training runs. If the optimal policies are dramatically different from the data collection policy, an additional run of supervised learning training can be performed with the current RL policy to increase its accuracy.

Once the supervised learning predictor is trained, we train RL with Proximal Policy Optimization (PPO) [19]. In our experiments, we use six different policy initializations, train each for 800,000 environment interactions, and pick the best based on 100 evaluation runs. All training is performed on a single core, using Intel [email protected] with 8GB of RAM, and a single RL training run takes 20 minutes.

4 EXPERIMENTS

We compare different control policies in the branching process environment we construct for SARS-CoV-2. We consider a set of five control actions for each individual for each day: null action, test and quarantine, test but don't quarantine, quarantine but don't test, and test and quarantine only if results are positive. We assume that there is no failure rate for actions; all actions that include a test cost 1, and others are costless. For α2, we use small values of 0.01 and 0.02, as typical SARS-CoV-2 contact tracing policies accept a large number of quarantine days for non-infectious individuals. For α3, we use values of 0.001, 0.005, 0.01, 0.02, 0.03, and 0.2.
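The seed-selection protocol of Sec. 3.4 (train several PPO initializations, then keep the best by mean evaluation reward) is generic; a stdlib sketch in which `train_one` and `evaluate` are stand-ins for a PPO training run and a single evaluation episode:

```python
from statistics import mean

def train_and_select(train_one, evaluate, n_seeds=6, n_eval=100):
    """Train one policy per seed; keep the best by mean evaluation reward.

    `train_one(seed) -> policy` and `evaluate(policy) -> float` are
    hypothetical stand-ins for an RL training run and one evaluation
    episode, respectively (names are ours).
    """
    best_policy, best_score = None, float("-inf")
    for seed in range(n_seeds):
        policy = train_one(seed)
        score = mean(evaluate(policy) for _ in range(n_eval))
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy, best_score
```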
We sample cluster size from a uniform distribution on (2, 40). The model code is available online (https://github.com/XueqiaoPeng/CovidRL).

4.1 Supervised Learning Model

We experiment with a variety of supervised learning model architectures (Tab. 2) to find one that achieves a high AUC across cluster sizes. We find that CNNs are generally most effective and compare different kernels and layer structures. In single-layer architectures, we find that larger 2D convolutions tend to achieve higher AUC. We then found that a single convolution layer followed by a linear layer performs just as well as deeper architectures—this setup of a (5, 2) 2D convolution followed by a linear layer is what we use in the experiments below.

Table 2: We find that two-layer architectures using a 2D convolution followed by a linear layer achieve performance on par with larger models. (Values are AUC.)

| Architecture | Cluster size = 4 | 8 | 16 | 32 |
1 Layer:
| Conv1d (5,2) | 0.798 | 0.807 | 0.823 | 0.830 |
| Conv1d (5,3) | 0.814 | 0.830 | 0.835 | 0.839 |
| Conv2d (5,2) | 0.800 | 0.814 | 0.827 | 0.830 |
| Conv2d (5,3) | 0.832 | 0.820 | 0.838 | 0.840 |
| Conv2d (5,4) | 0.858 | 0.849 | 0.843 | 0.859 |
| Conv2d (5,5) | 0.864 | 0.895 | 0.893 | 0.893 |
2 Layer:
| Conv1d (5,2) + Conv1d (1,2) | 0.824 | 0.830 | 0.833 | 0.840 |
| Conv2d (5,3) + Conv2d (1,3) | 0.883 | 0.903 | 0.898 | 0.897 |
| Conv2d (5,2) + Linear Layer | 0.955 | 0.960 | 0.947 | 0.961 |
| Conv2d (5,3) + Linear Layer | 0.951 | 0.960 | 0.940 | 0.964 |
3 Layer:
| Conv1d (5,3) + Conv1d (1,3) + Linear Layer | 0.958 | 0.957 | 0.950 | 0.961 |
4 Layer:
| Conv1d (4,3) + Conv1d (2,3) + Conv1d (1,3) + Linear Layer | 0.958 | 0.958 | 0.953 | 0.965 |

4.2 Benchmark Policies

We compare the RLSL approach we propose to several baselines.
•Threshold is the threshold-type policy suggested in Sec. 3.2. It does not use test actions.
This policy turns out to be highly conservative and results in long quarantine durations for all contacts for the tested α2 values.
•Symptom-Based Quarantine quarantines if an individual exhibits symptoms on the day before the observed day and otherwise does not.
•14-Day Quarantine quarantines individuals from the initial day they exhibit symptoms until either 14 days have passed or until they no longer exhibit symptoms, whichever is later. No test action is included.
•No Quarantine always performs the null action.

Table 3: RLSL achieves higher objective values (higher is better) than baselines across all tested α2 and α3.

| (α2, α3) | RLSL (Ours) | Threshold | Symptom-Based Quarantine | 14 Days Quarantine | No Quarantine |
| (0.01, 0.001) | −3.77±0.25 | −21.79±0.20 | −111.13±14.18 | −97.18±9.97 | −235.98±18.53 |
| (0.01, 0.005) | −10.27±0.15 | −21.79±0.20 | −111.13±14.18 | −97.18±9.97 | −235.98±18.53 |
| (0.01, 0.01) | −17.13±0.48 | −21.79±0.20 | −111.13±14.18 | −97.18±9.97 | −235.98±18.53 |
| (0.01, 0.02) | −44.22±0.84 | −21.79±0.20 | −111.13±14.18 | −97.18±9.97 | −235.98±18.53 |
| (0.01, 0.03) | −46.46±1.47 | −21.79±0.20 | −111.13±14.18 | −97.18±9.97 | −235.98±18.53 |
| (0.01, 0.2) | −110.92±1.54 | −21.79±0.20 | −111.13±14.18 | −97.18±9.97 | −235.98±18.53 |
| (0.02, 0.001) | −4.01±0.21 | −43.65±0.32 | −112.60±11.94 | −106.63±11.00 | −242.16±20.38 |
| (0.02, 0.005) | −17.64±0.32 | −43.65±0.32 | −112.60±11.94 | −106.63±11.00 | −242.16±20.38 |
| (0.02, 0.01) | −25.39±0.48 | −43.65±0.32 | −112.60±11.94 | −106.63±11.00 | −242.16±20.38 |
| (0.02, 0.02) | −49.28±0.66 | −43.65±0.32 | −112.60±11.94 | −106.63±11.00 | −242.16±20.38 |
| (0.02, 0.03) | −64.45±0.83 | −43.65±0.32 | −112.60±11.94 | −106.63±11.00 | −242.16±20.38 |
| (0.02, 0.2) | −120.21±0.22 | −43.65±0.32 | −112.60±11.94 | −106.63±11.00 | −242.16±20.38 |

Table 4: S1, S2, and S3 per individual compared across different cluster sizes (lower is better), using α2 = 0.01 and α3 = 0.01. Even relatively conservative strategies such as 14-day quarantine from exposure fail to isolate some infections in our simulation. RLSL can benefit substantially from the additional information available in large clusters, resulting in strong performance with low test costs.

Cluster size = 4:
| Policy | S1 | S2 | S3 |
| RLSL | 0.064±0.008 | 6.808±0.184 | 10.144±0.052 |
| Threshold | 0.078±0.013 | 16.012±0.211 | – |
| Symptom-Based Quarantine | 1.418±0.199 | 0.236±0.029 | – |
| 14-day Quarantine | 1.042±0.072 | 2.469±0.113 | – |
| No Quarantine | 2.361±0.195 | – | – |

Cluster size = 8:
| Policy | S1 | S2 | S3 |
| RLSL | 0.077±0.012 | 7.552±0.099 | 11.825±0.056 |
| Threshold | 0.063±0.013 | 17.656±0.198 | – |
| Symptom-Based Quarantine | 1.207±0.187 | 0.239±0.014 | – |
| 14-day Quarantine | 0.965±0.082 | 2.440±0.144 | – |
| No Quarantine | 2.597±0.282 | – | – |

Cluster size = 16:
| Policy | S1 | S2 | S3 |
| RLSL | 0.075±0.011 | 10.033±0.127 | 11.253±0.087 |
| Threshold | 0.05±0.008 | 19.681±0.173 | – |
| Symptom-Based Quarantine | 1.196±0.052 | 0.232±0.017 | – |
| 14-day Quarantine | 0.973±0.114 | 2.291±0.125 | – |
| No Quarantine | 2.075±0.203 | – | – |

Cluster size = 32:
| Policy | S1 | S2 | S3 |
| RLSL | 0.054±0.007 | 8.259±0.090 | 10.808±0.134 |
| Threshold | 0.016±0.003 | 20.701±0.319 | – |
| Symptom-Based Quarantine | 1.072±0.146 | 0.261±0.042 | – |
| 14-day Quarantine | 0.929±0.107 | 2.004±0.155 | – |
| No Quarantine | 1.856±0.173 | – | – |

Table 5: In cases where test costs are higher, RLSL produces policies that test too often, resulting in lower performance than RLSL models with only quarantine actions—we discuss potential fixes.

| (α2, α3) | RLSL | RLSL (Daily Test) | RLSL (No Test) | RL Only | Threshold (SL Only) |
| (0.01, 0.001) | −3.77±0.25 | −4.30±0.42 | −34.56±0.39 | −14.64±0.79 | −21.79±0.20 |
| (0.01, 0.005) | −10.27±0.15 | −13.15±0.15 | −34.56±0.39 | −20.32±0.83 | −21.79±0.20 |
| (0.01, 0.01) | −17.13±0.48 | −24.46±0.17 | −34.56±0.39 | −34.02±0.70 | −21.79±0.20 |
| (0.01, 0.02) | −44.22±0.84 | −45.62±1.27 | −34.56±0.39 | −46.10±1.14 | −21.79±0.20 |
| (0.01, 0.03) | −46.46±1.47 | −74.68±0.2 | −34.56±0.39 | −53.22±1.01 | −21.79±0.20 |
| (0.01, 0.2) | −110.92±1.54 | −737.78±3.33 | −34.56±0.39 | −84.35±1.04 | −21.79±0.20 |
| (0.02, 0.001) | −4.01±0.21 | −12.81±0.55 | −52.92±0.13 | −15.36±0.76 | −43.65±0.32 |
| (0.02, 0.005) | −17.64±0.32 | −23.72±0.47 | −52.92±0.13 | −25.66±0.56 | −43.65±0.32 |
| (0.02, 0.01) | −25.39±0.48 | −27.25±0.58 | −52.92±0.13 | −39.80±0.39 | −43.65±0.32 |
| (0.02, 0.02) | −49.28±0.66 | −50.50±0.11 | −52.92±0.13 | −63.07±0.81 | −43.65±0.32 |
| (0.02, 0.03) | −64.45±0.83 | −75.88±0.26 | −52.92±0.13 | −70.56±0.827 | −43.65±0.32 |
| (0.02, 0.2) | −120.21±0.22 | −739.98±1.516 | −52.92±0.13 | −162.4±2.36 | −43.65±0.32 |

4.3 Analysis

Our experimental results report the average objective value and standard error taken over 10 random clusters (Tab. 3). We find that RLSL and Threshold achieve better performance than baselines in all cases. However, our current methods for RLSL struggle relative to Threshold when tests are expensive. Our experimental results could be broadened by including more α values and more analysis as to where the RLSL policies gain their advantage (but see the discussion of Tab. 5 below for some insights).

Focusing on the setting of α2 = 0.01 and α3 = 0.01, we report objective values broken out by component and by cluster size, as measured per individual (Tab. 4). Here we can get an intuitive grasp of what is happening in the different policies. Threshold aggressively quarantines, resulting in S2 = 16–20, i.e., 16–20 days of quarantine without infection per contact, for the tested α values. This is able to drive S1 to a low value, resulting in an average objective value of −21.79. Recall that S1 is weighted much more heavily (100 times) than S2 in this setting. Symptom-based and 14-day quarantine reduce S2 by a factor of 8 to 100, but this causes S1 to be roughly 150 to 200 times higher. By leveraging tests, RLSL can reduce S2 by a factor of 2–3 and S1 by a factor of 0.8–3.5.

In the ablation study (Tab. 5), we gain a more detailed view into the operation of the RLSL policy.
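The per-individual S1/S2/S3 components reported in Tab. 4 amount to day-level tallies; a sketch over hypothetical per-day records (the record format is ours, and a unit test cost is assumed):

```python
def tally_costs(days):
    """Tally objective components for one individual.

    `days` is a list of (infectious, quarantined, tested) booleans, one
    tuple per simulated day. Returns (s1, s2, s3): transmission days not
    isolated, non-infectious quarantine days, and number of tests taken
    (assuming each test costs 1, as in our experiments).
    """
    s1 = sum(1 for inf, q, _ in days if inf and not q)
    s2 = sum(1 for inf, q, _ in days if q and not inf)
    s3 = sum(1 for _, _, tested in days if tested)
    return s1, s2, s3
```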
We see that the introduction of the SL outputs to the RL state results in better performance in all tested scenarios compared to RL Only, which uses the state representation of Fig. 4 without the first two rows.

We can observe limitations of the supervised infectiousness prediction model in Tab. 4, where the S2 cost does not decrease as cluster size increases—from Thm. 1, we can conclude that if p_inf is correct, the ratio of S1 to S2 should not depend on cluster size for Threshold. There are several possible causes of this issue. First, the SL model outputs might be miscalibrated, as is often the case for neural networks trained on highly imbalanced data. This issue could be fixed with post-hoc calibration such as Platt scaling [18]. In this instance, a more sophisticated calibration could be employed, with separate calibration parameters per cluster size, if necessary. Second, it may be the case that the SL model outputs are wrong for reasons other than calibration. For example, the model may receive insufficient relevant training data, as it is trained on data produced from a random policy and not from Threshold or RLSL. It is also possible that we performed insufficient architecture search.

We also see that RLSL (No Test) often performs better than RLSL as test costs increase. This suggests that RLSL is not finding a true optimal policy. This could likely be addressed by using a wider range of initialization values for RLSL—for example, initializing some seeds to policies that test very little (the initialization we use for RLSL and RL Only tests heavily). This observation has a silver lining: RLSL (No Test) can achieve much stronger performance than baselines even without tests. This implies that RLSL (No Test) is able to correct for the errors in Threshold to find a policy closer to what is suggested by Thm.
1.

5 DISCUSSION AND FUTURE WORK

This work aims to develop a generic multi-objective optimization approach for cluster-level optimization of NPIs. We formulate this problem for RL in a branching process environment. We present initial results that demonstrate the potential of our approach—in a branching process model of SARS-CoV-2, we can achieve substantially higher objective values than baseline policies. The resulting policies can be applied across all cluster sizes and do not take much time to train on consumer hardware. The policies we propose are able to heavily exploit superspreading dynamics.

Our vision for an infectious disease crisis is that a canonical probabilistic model of the disease is constructed and updated throughout the crisis. The model can be constructed from estimates of key disease parameters that are made from various sources throughout a crisis and can reflect uncertainty in these estimates. We advocate that superspreading dynamics be given substantial attention in these early stages due to the substantial influence on interventions that we find they can have. Using this canonical model, a branching process environment can be constructed and optimized against, as we propose in this paper. We do not consider uncertainty in the parameters of this model, but it is possible to do so with existing techniques; this leads to different RL algorithmic choices depending on the form of the uncertainty and the desired objective.

A key disadvantage of our approach as presented is the complexity of the resulting policies. For instance, executing our RLSL policy requires training and drawing outputs from two neural networks. In contrast, policies that were employed in the SARS-CoV-2 pandemic consisted of short lists of rules. We believe that this is not an inherent weakness of our approach—we can leverage interpretable ML and RL techniques to "distill" the RLSL policies into, say, low-depth decision trees, allowing them to be applied at scale with low logistical cost.
There will be some decrease in quality, but we suspect a still substantial advantage over baselines.

An area for future study is the cost and benefit of taking a cluster- rather than individual-level view of policy application. This imposes additional logistical costs, and the benefit is dependent on the degree of cluster-level transmission heterogeneity that is present. This trade-off is not well understood and is a critical area for future work.
YsYOhfnQ7D
Good paper with compelling results, could benefit from more intuition for methodological choices
4: Good paper, accept
This paper describes a novel RL approach to optimizing infectious disease control policy. The proposed method combines supervised learning with RL and shows strong performance compared to baseline policies.

Positives:
+ The branching-process formulation is well-described and intuitive
+ The need for an estimate of the probability of infection is well-motivated
+ The experiments are extensive and clearly show the strengths (and limitations) of the proposed RLSL framework
+ Overall, the paper is clear and easy to follow

Places for Improvement:
- The intuition of the representation and state space for both the SL and RL settings could be improved. Currently, it seems as though many representations were tested and this one was eventually chosen. Were they tested on validation data? More information about how these representations were chosen is necessary
- Why was PPO chosen for the RL policy? Were other RL techniques considered? This choice could benefit from more justification
- How hyperparameters were chosen should be discussed more. Currently, it almost seems as though the architecture and hyperparameters for the 2D CNN were chosen based on the same set in which policies were evaluated, which would be problematic
- As mentioned in the paper, calibration of the SL estimates is critical for the threshold-based approach. The authors should consider calibrating these probabilities and evaluating the calibration error in some way, to see if it can improve all methods, especially the threshold-based baseline
4: The reviewer is confident but not absolutely certain that the evaluation is correct
fhxHhXTnHc
KDD.org/2023/Workshop/epiDAMIK
2023
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs
["Serina Chang", "Adam Fourney", "Eric Horvitz"]
To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates, with correlations above 0.86, and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchical ontology of vaccine concerns, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.
["COVID-19", "vaccination", "health behaviors", "misinformation", "search logs", "graph machine learning"]
ABSTRACT
To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates, with correlations above 0.86, and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchical ontology of vaccine concerns, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.

KEYWORDS
COVID-19, vaccination, search logs, graph machine learning

ACM Reference Format:
Serina Chang†, Adam Fourney, and Eric Horvitz. 2023.
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 19 pages.

† Research performed during an internship at Microsoft.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK @ KDD'23, August 7 2023, Long Beach, CA
© 2023 Copyright held by the owner/author(s).

1 INTRODUCTION
COVID-19 vaccines provide significant protection against severe cases of SARS-CoV-2 [46, 59], yet a large portion of the United States remains unvaccinated. Effective vaccine policies—for example, where to place vaccine sites [49, 74], how to communicate about the vaccine [18, 72], and how to design campaigns to reach unvaccinated populations [5, 22, 60]—rely on detailed data about who is seeking vaccination, who is holding out, and why. However, existing data are insufficient [43]. Reported vaccination rates are frequently delayed [2], missing at the county level and below [70], and missing essential demographic data [33, 42].
Surveys provide a starting point for understanding vaccine hesitancy but are often limited by high-level questions [16], small or biased samples [13, 71], and self-reporting biases (e.g., recall or social desirability bias) [3, 66], especially in sensitive contexts such as vaccination [36].

Here, we demonstrate how large-scale search logs from Bing and machine learning (ML) can be leveraged to fill these gaps, enabling fine-grained estimation of vaccine rates and discovering the concerns of vaccine holdouts from their search interests. While search logs are powerful, with widespread coverage, real-time signals, and access to personal interests, the vast amounts of data they provide are unlabeled and unstructured, consisting of billions of natural language queries and clicks on search results. To derive meaning from these queries and clicks, we first impose structure by constructing query-click graphs, which encode aggregated query-click patterns as bipartite networks. Second, using a combination of semi-supervised graph ML techniques and manual annotation, we develop two computational resources that enable us to extract vaccine behaviors from large unlabeled search logs.

First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier achieves areas under the receiver operating characteristic curve (AUCs) above 0.90 on held-out vaccine intent labels in all states, and demonstrates strong agreement with CDC vaccination rates across states (r = 0.86) and over time (r = 0.89). Using our classifier, we can estimate vaccine intent rates to the level of ZIP code tabulation areas (ZCTAs), approximately 10x the granularity of counties and preceding lags in reporting.
We carefully correct for bias in our estimates from non-uniform Bing coverage, and demonstrate minimal additional bias from our classifier, as it achieves equivalent true and false positive rates across regions.

Second, we construct a novel ontology of COVID-19 vaccine concerns on search. Our ontology consists of 25,000 vaccine-related URLs, clicked on by Bing users, that we organize into a hierarchy of vaccine concerns from eight top categories to 36 subcategories to 156 low-level URL clusters. Unlike surveys, our ontology discovers these concerns directly from users' expressed interests and explores them at multiple scales. Furthermore, by measuring individuals' interest in each concern from their clicks, we capture revealed preferences, side-stepping potential biases in self-reporting [24, 66].

Combining our ontology with the vaccine intent classifier allows us to conduct a thorough analysis of how individuals' vaccine concerns relate to whether they decide to seek the vaccine. We use our classifier to identify two groups of users—vaccine early adopters and vaccine holdouts—and compare their search behaviors. We identify significant differences in their vaccine concerns and news consumption; for example, compared to early adopters matched on covariates, vaccine holdouts are 69% more likely to click on untrusted news sites. We find that vaccine concerns also differ significantly even within holdouts, varying across demographic groups.
Finally, we analyze the temporal dynamics of vaccine concerns and vaccine seeking, and discover that individuals exhibit telltale shifts in vaccine concerns when they eventually convert from holding out to preparing to accept the vaccine.

Our contributions can be summarized as follows:
(1) A novel vaccine intent classifier, developed with graph ML and human annotation, that achieves AUCs above 0.9 on all states and strong agreement with CDC vaccination rates;
(2) Bias-corrected estimates of vaccine intent rates from our classifier, including estimates for over 20,000 ZCTAs;
(3) A hierarchical ontology of COVID-19 vaccine concerns, including 25,000 URLs clicked on by Bing users, 156 URL clusters, 36 subcategories, and eight top categories;
(4) Analyses of vaccine holdouts' search concerns and news consumption, comparing to early adopters and studying dynamics over time.
We are publicly releasing our code, vaccine estimates, and ontology.1 We hope that our resources, methods, and analyses can provide researchers and public health agencies with valuable insights about vaccine behaviors, helping to guide more effective, data-driven interventions.

2 DATA
Our work uses a variety of datasets, including Bing search logs, CDC vaccination rates, US Census data, and Newsguard labels (Figure 1). Bing is the second largest search engine worldwide and in the US, with a US market share of around 6% on all platforms and around 11% on desktop [65]. Despite having non-uniform coverage across the US, Bing has enough penetration in the US that we can estimate representative samples after applying inverse proportional weighting (Section 4). The Bing data we use consist of individual queries made by users, where for each query, we have information including the text of the query, an anonymized ID of the user, the timestamp, the estimated geolocation (ZIP code, county, and state), and the set of URLs clicked on, if any.
Since our work is motivated by insufficient vaccine data and vaccine concerns in the US, we limit our study to search logs in the US market. However, the methods we introduce could be extended to study vaccination rates and vaccine concerns in other languages and countries. We apply our vaccine intent classifier (Section 3) to all Bing search logs in the US from February 1 to August 31, 2021.2

1 https://github.com/microsoft/vaccine_search_study.
2 February 2021 was the earliest that we could study following data protection guidelines, which allow us to store and analyze search logs up to 18 months in the past. We end in August 2021, since the FDA approved booster shots in September and our method is not designed to disambiguate between vaccine seeking for the primary series versus boosters.

Figure 1: Our work integrates a variety of datasets and methods to analyze vaccine behaviors from search logs.

To evaluate our vaccine intent classifier, we compare it to vaccination rates reported by the CDC (Section 4). The CDC provides daily vaccination rates at the levels of states [27] and counties [26]. CDC data are essential but limited, with a substantial portion of county-level data missing. These limitations serve as one of the motivations of our work, since we hope that our vaccine intent classifier can serve as a complementary resource to monitor vaccination rates, especially in smaller regions.
To characterize demographic trends in vaccine intent, we use data from the US Census' 2020 5-year American Community Survey [15]. To capture political lean, we use county-level data from the 2020 US presidential election [53]. To quantify the trustworthiness of different news sites, we use labels from Newsguard [52]. Finally, to evaluate the representativeness of Bing search trends, we compare them to Google search trends, which are publicly available online [34].

Data ethics. Our work was approved by the Microsoft IRB office and by an internal privacy review process which included officers from both Microsoft Research and the Bing product team. When we use search logs, we are mindful of the need to balance privacy and social benefits when using potentially sensitive user data. While we study individual search logs, since we need to be able to link individual vaccine outcomes (as predicted by our classifier) to search interests, those sessions are assembled using only anonymous user identifiers, which are disassociated from any specific user accounts or user profiles, and cannot be linked to any other Microsoft products. Likewise, in this anonymous view of the logs, location and demographic data were limited to ZIP code-level accuracy. Finally, we are careful to only report results aggregated over thousands of individuals. Aside from Bing search logs, all of the data sources we use are publicly available and aggregated over many individuals.

3 VACCINE INTENT CLASSIFIER
Our first goal is to develop a classifier that can accurately detect when a search user is expressing vaccine intent, i.e., trying to get the COVID-19 vaccine (e.g., book an appointment or find a location).
Detecting vaccine intent requires precision: for example, if a user issues the query [covid vaccine], they may be trying to get the vaccine, but they could also be generally curious about vaccine information or eligibility. Thus, we begin by defining a set of regular expressions that allow us to identify vaccine intent queries, i.e., queries that unambiguously express vaccine intent. To be included, the query must include both a COVID-19 term ("covid" or "coronavirus") and a vaccine term ("vaccin", "vax", "johnson", etc.). In addition, the query must satisfy at least one of the following criteria: (1) matching some variant of "find me a COVID-19 vaccine", (2) containing appointment-related words or location-seeking words, (3) containing a pharmacy name.

Figure 2: Our pipeline of methods to identify a large, high-precision set of vaccine intent URLs. (Step 1: URL candidates via personalized PageRank; Step 2: annotation on Amazon Mechanical Turk, asking "Given that a person clicked on this page during a search session, how sure are you that this person is seeking to get the COVID-19 vaccine?"; Step 3: URL expansion via graph neural network.)

However, in addition to maintaining high precision, we seek to detect as many users as possible who have expressed vaccine intent, so that we have sufficient statistical power for our downstream analyses. Since our search logs contain both queries and clicks, we lose the opportunity to detect many more users if we only detect vaccine intent based on queries. For example, a user may issue the ambiguous query [covid vaccine], but then click on the URL for the CVS COVID-19 vaccine registration page, thus clarifying their intent through their clicks [61].
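The query filter described above can be sketched as a pair of regular-expression checks. This is a minimal illustration, not the authors' exact patterns: the appointment/location/pharmacy word lists here are assumptions for demonstration.

```python
import re

# Illustrative patterns only; the paper's actual regular expressions
# are more extensive.
COVID_RE = re.compile(r"covid|coronavirus")
VACCINE_RE = re.compile(r"vaccin|vax|johnson")
CRITERIA_RE = re.compile(
    r"appointment|sign\s*up|register|schedule"  # appointment-related words
    r"|near\s*me|where|location|finder"         # location-seeking words
    r"|cvs|walgreens|walmart"                   # pharmacy names (sample)
)

def is_vaccine_intent_query(query: str) -> bool:
    """A query counts as unambiguous vaccine intent only if it has a
    COVID-19 term, a vaccine term, and at least one
    appointment/location/pharmacy criterion."""
    q = query.lower()
    return bool(COVID_RE.search(q) and VACCINE_RE.search(q)
                and CRITERIA_RE.search(q))
```

On the example from the text, [covid vaccine] alone is rejected as ambiguous, while [covid vaccine near me] passes.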
The challenge with URLs is that they are less formulaic than queries, so we cannot easily define regular expressions to identify URLs expressing vaccine intent. Our key insight is that, while we cannot use regular expressions to identify URLs, we can use them to identify vaccine intent queries and then use those queries to identify URLs, based on common query-click patterns. For example, vaccine intent queries such as [cvs covid vaccine] or [covid vaccine near me] may result in clicks on the CVS COVID-19 vaccine registration page. To capture these patterns, we construct query-click graphs [20, 45], which are bipartite networks between queries and URLs where an edge from a query to a URL indicates how often this query is followed by a click on this URL. Specifically, we construct a query-click graph per US state, aggregating over queries and clicks from two representative months in our study period (April and August 2021). Then, our pipeline proceeds in three steps (Figure 2): first, we use personalized PageRank to propagate labels from queries to URLs, so that we can generate a set of URL candidates (Section 3.1); next, we present the URL candidates to annotators on Amazon Mechanical Turk to label as vaccine intent or not (Section 3.2); finally, we use those labels to train graph neural networks (GNNs) so that we can further expand our set of vaccine intent URLs (Section 3.3).

Table 1: Top 5 URLs from Personalized PageRank (S-PPR) for the four largest states in the US.
CA: https://myturn.ca.gov/ | https://www.cvs.com/immunizations/covid-19-vaccine | https://www.goodrx.com/covid-19/walgreens | https://www.costco.com/covid-vaccine.html | https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
NY: https://covid19vaccine.health.ny.gov/ | https://www.cvs.com/immunizations/covid-19-vaccine | https://www.walgreens.com/topic/promotion/covid-vaccine.jsp | https://vaccinefinder.nyc.gov/ | https://www.goodrx.com/covid-19/walgreens
TX: https://www.cvs.com/immunizations/covid-19-vaccine | https://vaccine.heb.com/ | https://www.walgreens.com/topic/promotion/covid-vaccine.jsp | https://corporate.walmart.com/covid-vaccine | https://dshs.texas.gov/covidvaccine/
FL: https://www.publix.com/covid-vaccine | https://www.cvs.com/immunizations/covid-19-vaccine | https://www.walgreens.com/topic/promotion/covid-vaccine.jsp | https://floridahealthcovid19.gov/vaccines/ | https://www.goodrx.com/covid-19/walgreens

3.1 Personalized PageRank for URL candidates
Personalized PageRank [14] is a common technique for seed expansion, where a set of seed nodes in a graph are identified as members of a community, and one wishes to expand from that set to identify more community members [40]. In our case, the vaccine intent queries act as our seed set, and our goal is to spread the influence from the seed set over the rest of the query-click graph. Given a seed set S, personalized PageRank derives a score for each node in the graph that represents the probability of landing on that node when running random walks from S.

We run personalized PageRank from the seed set of vaccine intent queries (S-PPR) to derive scores for all URLs in each query-click graph. Then, we order the URLs from each state according to their S-PPR ranking and keep the union over states of their top 100 URLs as our set of URL candidates, resulting in 2,483 candidates. The number of URLs we have in the union is much lower than the number of states multiplied by 100, since there is overlap between states. However, there is also substantial heterogeneity in top URLs across states, reflecting state-specific vaccine programs and policies (Table 1). By constructing separate graphs and running S-PPR per state, our approach is uniquely able to capture this state-specific heterogeneity.
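A toy version of this seed-expansion step, for intuition. This is a pure-Python power iteration over a small undirected query-click graph, not the production per-state system (which ranks URLs in each state graph and takes the union of top-100 lists); the example graph and node names are invented.

```python
from collections import defaultdict

def personalized_pagerank(edges, seeds, alpha=0.85, iters=50):
    """Power iteration for personalized PageRank on a query-click
    graph. `edges` maps query -> {url: click_count}; edges are
    treated as undirected and click-weighted. `seeds` are the
    vaccine intent queries; random walks restart uniformly over
    them. Returns a score per node."""
    adj = defaultdict(dict)
    for q, urls in edges.items():
        for u, w in urls.items():
            adj[q][u] = adj[q].get(u, 0) + w
            adj[u][q] = adj[u].get(q, 0) + w
    nodes = list(adj)
    restart = {n: (1 / len(seeds) if n in seeds else 0.0) for n in nodes}
    score = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - alpha) * restart[n] for n in nodes}
        for u in nodes:
            total = sum(adj[u].values())
            for v, w in adj[u].items():
                # Walk mass flows along edges proportionally to clicks.
                nxt[v] += alpha * score[u] * w / total
        score = nxt
    return score
```

URLs connected (directly or via shared queries) to the seed queries accumulate score, while unrelated URLs do not; ranking URL nodes by score yields the candidate list.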
In supplementary experiments, we show that an alternative approach that uses a combined graph over states severely hurts performance for small states (Section A2.2).

S-PPR also provides scores for all queries in the graph, but we found that the seed set was comprehensive in identifying vaccine intent queries. The top-ranked queries that were not in the seed set tended to be location-specific, such as [covid vaccine new york], which is suggestive of vaccine intent but not unambiguous enough. Thus, in the subsequent steps of annotation and GNN expansion, we only seek to add URLs, and consider regular expressions sufficient for identifying queries. However, we also selected a sample of regular expression-detected queries to present to annotators, to validate whether they were truly vaccine intent. To capture a diverse sample, we use the union over the top 5 and bottom 5 queries per state (ranked by S-PPR), after filtering out queries that were issued by fewer than 50 users, resulting in 227 queries to label.

3.2 Annotation on Amazon Mechanical Turk
In this step, we present our URL candidates (and sampled queries) to annotators on AMT. For each URL, we first present it to three annotators. If all three give it a positive label (i.e., Highly Likely or Likely), then we label this URL as vaccine intent. If two give it a positive label and one does not, we assign it to one more annotator, and label it as vaccine intent if that annotator gives a positive label. In other words, we require vaccine intent URLs to receive three positive annotations. With this relatively strict bar, we still find that a large majority (86%) of our URL candidates are labeled as vaccine intent.
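The labeling rule above is a small decision procedure; a minimal encoding, with booleans standing in for the "Highly Likely"/"Likely" Likert judgments:

```python
def aggregate_annotations(first_three, extra=None):
    """AMT rule described above: a URL is labeled vaccine intent iff
    it collects three positive judgments. 3-of-3 positives accept
    immediately; a 2-of-3 split escalates to a fourth annotator whose
    judgment decides; fewer than two positives reject."""
    positives = sum(bool(a) for a in first_three)
    if positives == 3:
        return True
    if positives == 2:
        return bool(extra)  # fourth annotator breaks the tie
    return False
```

The strictness comes from requiring three positives in every accepting path, which trades recall for the high precision the pipeline needs.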
Furthermore, we observe a clear relationship between S-PPR rank and the percentage labeled as vaccine intent: for example, around 90% of URLs from ranks 0 to 20, around 81% of URLs from ranks 40 to 60, and around 71% of URLs from ranks 80 to 100 (Figure A2). We also find a very high positive rate (96%) among the queries that we tested, thus validating our regular expressions.

3.3 Graph neural networks for expansion
Since manual annotation is expensive, we wish to augment our efforts by training ML models on the AMT labels, then use the models to expand our set of vaccine intent URLs. We formulate this problem as semi-supervised node classification on a graph, since the URLs are nodes in the query-click graph and we are trying to predict whether a URL indicates vaccine intent or not, given labels for a subset of URLs. In this section, we provide an overview of our modeling procedure, with details in Section A1.

GNN architecture and training. To solve this problem, we design a GNN [39] that consists of character-level convolutions (CNN) and graph convolutions. We use the CNNs to capture textual information in the queries and URLs, since text can be informative for this problem (e.g., the appearance of "vaccine"). The graph convolutions allow us to learn representations of URLs that draw from the representations of their neighboring queries, which draw from the representations of their neighboring URLs, and so on. In this way, we can capture "similar" URLs in embedding space (similar in terms of both text and graph structure).

To train and test our model, we randomly split the URL labels into a train set (60%), validation set (15%), and test set (25%). However, some states have much smaller graphs, and therefore, fewer positive and negative labels. For example, for Wyoming, we only have 245 positive and 276 negative URLs. We find that with such few labels, the model cannot adequately learn how to predict vaccine intent, with AUCs far below those of large states (Table A1).
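The graph-convolution building block described above can be sketched in a few lines of numpy. This is a toy forward pass of a single mean-aggregation layer, not the paper's model (which stacks character-level CNNs and multiple layers and is trained end to end):

```python
import numpy as np

def graph_conv_layer(A, H, W):
    """One graph-convolution layer: each node averages the feature
    vectors of its neighbors (plus itself, via a self-loop), then
    applies a learned linear map W and a ReLU. A is the (n, n)
    adjacency matrix; H is the (n, d) node-feature matrix."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # per-node degree
    H_agg = (A_hat / deg) @ H               # mean over neighborhood
    return np.maximum(H_agg @ W, 0.0)       # linear map + ReLU
```

Stacking such layers on the bipartite query-click graph is what lets a URL's representation draw on its neighboring queries' representations, and transitively on their neighboring URLs.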
To address this issue, we pre-train the model on S-PPR rankings, which requires no additional supervision. Our intuition is that S-PPR already performed remarkably well at predicting vaccine intent, as we discussed in the prior section. Furthermore, S-PPR rankings do not require any manual labels; we derive them entirely from our initial vaccine intent queries, which were automatically labeled using regular expressions. This pre-training encourages the model to learn URL representations that are predictive of S-PPR rankings, which we find helps substantially with predicting vaccine intent.

Evaluating GNN performance. We evaluate model performance by computing its AUC on the held-out test set. Furthermore, to account for randomness from model training and data splitting, we run 10 random trials for every model/state, where in each trial, we re-split the URL labels, retrain the model on the train set, and re-evaluate the model's performance on the test set. First, we find that pre-training significantly improves performance for the smaller states; for example, the mean AUC for Wyoming increases from 0.74 to 0.95 (Figure 3a, Table A1). We find that pre-training seems unnecessary for the larger states, such as Connecticut and Tennessee, where we are already achieving high AUCs above 0.98. After incorporating pre-training for smaller states (fewer than 5,000,000 nodes), we are able to achieve AUCs above 0.90 for all 50 states and above 0.95 for 45 states (Figure 3b).

Discovering new vaccine intent URLs. Finally, we use our trained GNNs to identify new vaccine intent URLs. In order to decide which new URLs to include, we need a score threshold. Our goal is to set the threshold such that any URL that scores above it is very likely to truly be vaccine intent (i.e., we want to maintain high precision). Borrowing the idea of "spies" from positive-unlabeled learning [8], our idea is to use the held-out positive URLs in the test set to determine where to set the threshold.
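A minimal sketch of this spy-based threshold choice, using the two criteria the paper defines (median score of held-out positives, and minimum threshold reaching 0.9 precision on the held-out set); the fallback when the precision target is never met is my addition:

```python
import numpy as np

def select_threshold(scores, labels, target_precision=0.9):
    """Pick a URL-inclusion threshold from held-out test URLs.
    t_med: median GNN score of the held-out positive URLs (spies).
    t_prec: smallest score threshold whose precision on the held-out
    set reaches target_precision. A new URL must clear both, i.e.
    max(t_med, t_prec)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    t_med = float(np.median(scores[labels]))
    t_prec = float(scores.max())  # fallback if target never reached
    for t in np.sort(np.unique(scores)):
        kept = scores >= t
        if labels[kept].mean() >= target_precision:
            t_prec = float(t)
            break
    return max(t_med, t_prec)
```

In the paper this check is additionally repeated across the 10 random trials, with a URL admitted only if it passes in at least 6 of them.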
We consider two thresholds: (1) t_med, the median score of the held-out positive URLs, and (2) t_prec, the minimum threshold required to achieve precision of at least 0.9 on the held-out test set. Then, we only include URLs that pass both thresholds in at least 6 out of the 10 random trials. Even with this strict threshold, we discover around 11,400 new URLs (Table A2), increasing our number of vaccine intent URLs by 10x. In the following section, we also evaluate the impact of adding these URLs on our ability to estimate regional vaccine intent rates. We find that the new URLs not only increase our coverage of vaccine intent users by 1.5x but also further improve our agreement with reported vaccination rates from the CDC (Table 2).

4 ESTIMATING VACCINE INTENT RATES
Using our classifier, we can estimate regional rates of vaccine intent. In this section, we discuss how we correct for bias in our estimates, validate against CDC vaccination rates, and use our estimates to derive insights about fine-grained vaccination trends.

Bias evaluation. In Section A2, we decompose potential bias in our approach into two key sources: first, bias from non-uniform Bing coverage, and second, bias from non-uniform true positive rates (TPR) and false positive rates (FPR) of our classifier. We show that, if we can correct for non-uniform Bing coverage and show that our classifier's TPRs and FPRs do not significantly differ across regions, our vaccine intent estimates should, theoretically, form unbiased estimates of true vaccination rates. We evaluate our classifier's TPRs and FPRs on held-out vaccine intent labels, using the same score threshold we used for discovering new vaccine intent URLs. We find that our classifier does indeed achieve statistically equivalent TPRs and FPRs across states (Figure 3b), suggesting that our classifier contributes minimal additional bias. We discuss below how we correct for non-uniform Bing coverage.
Figure 3: (a) GNN results with and without pre-training for Wyoming, one of the smallest states. Each line represents one of 10 random trials. (b) Final GNN results for all 50 states, with pre-training for smaller states. Each dot represents a state, with its y-coordinate representing the mean metric over 10 trials and grey bars indicating standard deviation.

Table 2: Each step of our classification pipeline (Section 3) improves both our correlation with CDC vaccination rates and our coverage of vaccine intent users.
Pipeline step | CDC corr. | # vaccine intent users
Only queries | 0.62 | 3.18M
+ manual URLs | 0.80 | 4.95M
+ manual and GNN URLs | 0.86 | 7.45M

Additionally, to evaluate the representativeness of Bing data, we compare search trends for vaccine intent queries between Google and Bing and find that, even before applying corrections to Bing data, the trends are highly correlated (Figure A4).

Estimating coverage-corrected rates. When we apply our classifier to Bing search logs from February 1 to August 31, 2021, we find 7.45 million "active" Bing users who expressed vaccine intent through their queries or clicks. We focus on active Bing users, i.e., those who issued at least 30 queries in a month, since we can reliably assign them to a location based on their mode ZIP code (or county or state) from those queries. Given a ZCTA z, we compute N(v̂, z), the number of active Bing users from z for whom we detect vaccine intent.
Furthermore, we estimate the ZCTA's Bing coverage as N(b, z)/N(z), where N(b, z) is its average number of active Bing users over the months in our study period and N(z) is its population size from the 2020 5-year American Community Survey [15]. Then, our coverage-corrected vaccine intent estimate p̃(v, z) for ZCTA z is

p̃(v, z) = [N(v̂, z)/N(z)] / [N(b, z)/N(z)] = N(v̂, z)/N(b, z).

To estimate the vaccine intent rate for a set Z of ZCTAs, e.g., a state or county, we simply take the population-weighted average.

Comparison to CDC vaccination data. When we compare our vaccine intent estimates to state-level vaccination rates from the CDC, we observe strong correlation (r = 0.86) on cumulative rates at the end of August 2021 (Figure 4). Notably, we find that the correlation drops to r = 0.79 if we do not correct for Bing coverage in our estimates. Furthermore, we find that each step of our classification pipeline—only using queries from regular expressions, incorporating manually annotated URLs from personalized PageRank and AMT, incorporating URLs found by GNNs—improves both our correlation with CDC rates and the number of users we are able to identify (Table 2).

Figure 4: Comparing CDC state vaccination rates vs. estimated vaccine intent rates from Bing search logs.

Figure 5: Rates over time of first vaccine intent (top) vs. first dose from CDC (bottom) for the four largest states in the US.
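The coverage correction and the population-weighted regional aggregation can be sketched directly from the formula: the population terms cancel, so the per-ZCTA rate reduces to detected vaccine-intent users over active Bing users.

```python
def corrected_vaccine_intent(n_intent, n_bing):
    """Coverage-corrected vaccine intent rate for one ZCTA:
    N(v_hat, z) / N(b, z), after the N(z) terms cancel."""
    return n_intent / n_bing

def region_rate(zctas):
    """Population-weighted average over a region's ZCTAs.
    `zctas` is a list of (n_intent, n_bing, population) tuples,
    where population N(z) supplies the weights."""
    total_pop = sum(pop for _, _, pop in zctas)
    return sum(pop * corrected_vaccine_intent(n_i, n_b)
               for n_i, n_b, pop in zctas) / total_pop
```

For example, two equally sized ZCTAs with rates 0.5 and 0.1 yield a regional rate of 0.3.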
Notably, if we only use queries, the correlation drops to r = 0.62 and we lose 57% of the users we identified with our full classifier, demonstrating the value of adding vaccine intent URLs through our graph ML framework.

Additionally, we compare our vaccine intent estimates to the CDC's vaccination rates over time. We observe strong correlations here as well, especially if we allow the CDC time series to lag behind the vaccine intent time series (Figure 5). With lags of 7-15 days (IQR), the median correlation over states reaches r = 0.89; without a lag, the median correlation drops to r = 0.78. The CDC's lag demonstrates an advantage of our classifier, as it can detect vaccine seeking in real time without delays from reporting.

Figure 6: (a) Using our classifier, we can estimate vaccine intent rates per ZCTA, approximately 10x the granularity of counties. (b) Zooming in on New York City shows that estimated vaccine intent rates vary substantially across ZCTAs, even within the same city or county. (c) Correlations between ZCTA vaccine intent rates and demographic variables.

Granular trends in vaccine seeking. Our vaccine intent classifier allows us to pinpoint who was seeking the COVID-19 vaccine, where, and when. We estimate cumulative vaccine intent rates up to the end of August 2021 at the level of ZCTAs (Figure 6a), approximately 10x the granularity of counties, which is the finest-grained vaccination data the CDC provides and, still, with many counties missing or having incomplete data [70]. We observe substantial heterogeneity in vaccine intent at the ZCTA-level, even within the same states and counties.
For example, when we focus on New York City, we see that Manhattan and Queens have higher vaccine intent rates, and within Queens, ZCTAs in the northern half have higher rates (Figure 6b), aligning with reported local vaccination rates in New York City [11].

We can also use our estimates to characterize demographic trends in vaccination. When we measure correlations between ZCTA vaccine intent rate and different demographic variables, we find that overall demographic trends from our estimates align closely with prior literature [37, 41, 71, 76]. For example, we observe strong positive correlations with education, income, and population density, and a strong negative correlation with percent Republican (Figure 6c). However, we discover more nuanced trends when we look closer. Demographic trends vary significantly across states (Figure A5), especially for race and ethnicity, and trends change over time. For example, we estimate that older ZCTAs were much likelier to seek the vaccine early in 2021 but this trend fell over time (Figure A6a), reflecting how the US vaccine rollout initially prioritized seniors [38], and we see an increase in vaccine intent from more Republican ZCTAs in summer 2021 (Figure A6b). Thus, our classifier both confirms existing findings and enables new analyses with finer granularity across regions, demographics, and time.

5 SEARCH CONCERNS OF HOLDOUTS

We use our vaccine intent classifier to identify two groups: vaccine early adopters, who expressed their first vaccine intent before May 2021, and vaccine holdouts, who waited until July 2021 to show their first vaccine intent, despite becoming eligible by April.³ Comparing the search interests of these two groups allows us to discover relationships between expressed vaccine concerns, news consumption, and vaccine decision-making.
To reduce potential confounding, we match each holdout with a unique early adopter from the same county and with a similar average query count, since we know that the populations seeking vaccination changed over time and we do not want our comparisons to be overpowered by regional or demographic differences. In our following analyses, we compare the search interests of the matched sets, with over 200,000 pairs.

Vaccine holdouts are more likely to consume untrusted news. First, we analyze the trustworthiness of news sites clicked on by vaccine holdouts versus early adopters. We use ratings from Newsguard, which assigns trust scores to news sites based on criteria such as how often the site publishes false content and how it handles the difference between news and opinion [52]. We find that, in the period while vaccine holdouts were eligible but still holding out (April to June 2021), holdouts were 69% (95% CI, 67%-70%) likelier than their matched early adopters to click on untrusted news, defined by Newsguard as domains with trust scores below 60. Furthermore, we see that as the trust score from Newsguard degrades, the likelier it was that holdouts clicked on the site, relative to early adopters (Figure 7a). For example, sites that are known for spreading COVID-19 misinformation, such as Infowars [25], RT [6], and Mercola [31], were much likelier to be clicked on by holdouts.

³We did not consider as holdouts those who never showed vaccine intent during our study period, since those users may have gotten their vaccine in ways that are not visible via search data.
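The county-and-query-count matching described above can be sketched as a greedy nearest-neighbor pairing. This is an illustrative reconstruction, not the paper's implementation; the record fields (`county`, `queries`, `id`) and all users are hypothetical.

```python
# Sketch of matching each holdout to a unique early adopter from the same
# county with the closest average query count (greedy nearest-neighbor).
from collections import defaultdict

def match_users(holdouts, early_adopters):
    pool = defaultdict(list)  # county -> still-available early adopters
    for u in early_adopters:
        pool[u["county"]].append(u)
    pairs = []
    for h in holdouts:
        candidates = pool[h["county"]]
        if not candidates:
            continue  # no early adopter left in this county; holdout unmatched
        best = min(candidates, key=lambda u: abs(u["queries"] - h["queries"]))
        candidates.remove(best)  # each early adopter is used at most once
        pairs.append((h["id"], best["id"]))
    return pairs

holdouts = [{"id": "h1", "county": "A", "queries": 40},
            {"id": "h2", "county": "A", "queries": 90}]
adopters = [{"id": "e1", "county": "A", "queries": 85},
            {"id": "e2", "county": "A", "queries": 45},
            {"id": "e3", "county": "B", "queries": 40}]
pairs = match_users(holdouts, adopters)
```

Matching on county and activity level is what lets the downstream comparisons attribute click differences to group membership rather than geography or engagement.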
In comparison, individuals who did not show their first vaccine intent until July 2021 likely did not receive the vaccine before.

Figure 7: In all subfigures, news/categories are colored from yellow to dark purple to represent most holdout-leaning to most early adopter-leaning. (a) The lower the trust rating from Newsguard, the likelier it is that vaccine holdouts click on the news site, relative to early adopters. (b) Holdouts' top category concerns include Vaccine Safety, Requirements, and Information, with varying proportions over time. (c) Comparing holdouts vs. early adopters' relative probabilities of clicking on each subcategory (from April to June 2021) reveals each group's distinctive concerns. (d) Near when holdouts express vaccine intent (±3 days) in July and August 2021, their concerns become much more like the concerns of early adopters, with a few important differences.

Ontology of vaccine concerns on search. To characterize vaccine-related search interests in far more detail, we construct a hierarchical ontology of vaccine concerns, defined in terms of 25,000 vaccine-related URLs that were clicked on by early adopters or holdouts.
We construct our ontology from the bottom-up: first, we seek to automatically partition the URLs into clusters. Leveraging graph ML again, we formulate this as a community detection problem on graphs, and apply the Louvain algorithm [12] to the collapsed URL-URL graph (collapsing the bipartite query-click graph over queries). We find that this approach results in remarkably coherent clusters (Table A3), due to the strength of the signal contained in query-click graphs, and outperforms standard topic modeling approaches such as LDA [10]. Based on these clusters, we design a comprehensive set of subcategories and top categories, and sort the clusters accordingly. For example, we identify one cluster of news stories announcing vaccine passport requirements in cities, which we sort under the proof of vaccination subcategory and Vaccine Requirements top category. This bottom-up approach allows us to discover and measure vaccine concerns directly from users' search interests and analyze them at multiple scales, providing complementary insights to more traditional surveys.

In Figure A1, we summarize our resulting ontology, which consists of 8 top categories and 36 subcategories. Some top categories encompass a number of distinct subcategories: for example, under Vaccine Safety, we include normal side effects, severe side effects, concerns about reproductive health, vaccine history and development, FDA approval, fear of vaccine-caused deaths, and "eerie" fears (e.g., myths about vaccine shedding or becoming magnetic [28]). At the top category-level, we find that vaccine holdouts are, by far, the most concerned about Vaccine Safety, which accounts for 23% of their vaccine-related clicks, followed by Vaccine Information (10%) and Vaccine Requirements (9%).
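The clustering step above can be sketched in two stages: collapse the bipartite query-click graph into a URL-URL co-click graph, then partition that graph. The paper applies the Louvain algorithm to the collapsed graph; this dependency-free toy substitutes connected components for Louvain (named swap, for simplicity), which coincides with Louvain's output on this tiny example. All queries and URLs are hypothetical.

```python
# Sketch: collapse the bipartite query-click graph over queries (URLs become
# linked when they share a clicking query), then partition the URL-URL graph.
from collections import defaultdict
from itertools import combinations

clicks = {  # hypothetical query -> set of clicked URLs
    "cvs covid vaccine": {"cvs.com/vaccine"},
    "covid vaccine near me": {"cvs.com/vaccine", "vaccinefinder.org"},
    "vaccine side effects": {"cdc.gov/side-effects"},
}

weights = defaultdict(int)  # co-click weight per URL pair
adj = defaultdict(set)      # URL-URL adjacency of the collapsed graph
urls = set()
for urlset in clicks.values():
    urls |= urlset
    for u, v in combinations(sorted(urlset), 2):
        weights[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)

def components(nodes, adj):
    """Connected components, standing in for Louvain in this toy example."""
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

clusters = components(sorted(urls), adj)  # URL clusters from co-click structure
```

On real query-click graphs the co-click graph is dense and weighted, which is where a modularity-based method like Louvain earns its keep over plain components.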
We also observe changes in interests over time (Figure 7b): for example, interest in Vaccine Incentives increased in May 2021, and interest in Vaccine Effectiveness grew in June 2021, following the spread of the Delta variant.

Distinctive concerns of holdouts vs. early adopters. Our ontology allows us to compare the vaccine concerns of holdouts and their matched early adopters. First, during the period from April to June 2021, we find that holdouts were 48% less likely than early adopters to click on any vaccine-related URL. Furthermore, their distribution of concerns within their vaccine-related clicks differed significantly (Figure 7c). Using the subcategories from our ontology, we find that holdouts were far more interested in religious concerns about the vaccine; anti-vaccine messages from experts and high-profile figures; avoiding vaccine requirements by seeking exemptions, banning mandates, or obtaining fake proof of vaccination; eerie fears and vaccine-caused deaths; and FDA approval and vaccine development. In comparison, early adopters were much more concerned about normal side effects, vaccine efficacy, comparing different types of vaccines, and information about each vaccine (Moderna, Pfizer, and Johnson & Johnson). These differences reveal the importance of a fine-grained ontology; for example, at the top category level, we would see that both groups were interested in Vaccine Safety but miss that early adopters were more concerned about normal and severe side effects, while holdouts were more concerned about eerie fears and vaccine-caused deaths. Our approach also allows us to study who is expressing these concerns in greater granularity. Even within holdouts, we observe significant variability in concerns across demographic groups (Figure A7).
For example, holdouts from more Democrat-leaning ZCTAs were particularly concerned about FDA approval and vaccine requirements, while holdouts from more Republican-leaning ZCTAs were more concerned about eerie fears and vaccine incentives.

Holdouts appear like early adopters when seeking the vaccine. In our final analysis, we exploit the fact that all of our vaccine holdouts eventually expressed vaccine intent to explore how vaccine concerns change as an individual converts from holdout to adopter. From July to August 2021, we analyze how holdouts' vaccine concerns change in the small window (±3 days) surrounding their expressed vaccine intent, compared to their typical concerns outside of that window. We find that in those windows, holdouts' vaccine concerns nearly reverse, such that they look much more like early adopters than their typical selves (Figure 7d nearly reverses 7c). During this time, holdouts become far more interested in the Johnson & Johnson vaccine, comparing different vaccines, and vaccine incentives, and less interested in anti-vaccine messages and vaccine fears. Notably, not all early adopter-leaning concerns reverse as dramatically; for example, even while expressing vaccine intent, holdouts remain less interested in the Pfizer and Moderna vaccines, which may reflect how vaccine hesitant individuals were quicker to accept the one-shot Johnson & Johnson vaccine, instead of the two-shot mRNA vaccines [21, 73]. Furthermore, there are some early adopter-leaning concerns that holdouts do not pick up on during this time, such as interest in vaccine rates.
We hypothesize that these concerns are more reflective of an early adopter "persona" rather than of concerns that would become relevant when seeking the vaccine, such as comparing different vaccines.

6 RELATED WORK

Our work centers Bing search logs, which have been used to study other health issues such as shifts in needs and disparities in information access during the pandemic [67, 68], health information needs in developing nations [1], experiences around cancer diagnoses [55, 56], concerns rising during pregnancy [29], and medical anxieties associated with online search [75]. Our efforts build on prior work that extracts insights about the COVID-19 vaccine from digital traces, such as social media [50, 57, 58] and aggregated search trends [7, 23, 48]. Our work is also related to other efforts to detect health conditions online, such as predicting depression from social media [19] and monitoring influenza from search queries [32].

Our work seeks to address the challenges of working with digital traces [24, 54] and limitations of prior work [32, 44] by developing ML and human-in-the-loop methods to precisely label search logs and evaluate bias. Furthermore, as one of the first works to use individual search logs to study the COVID-19 vaccine, we have the rare opportunity to link vaccine outcomes (predicted by our classifier) to the same individual's search interests. Our graph ML pipeline is also similar to other "big data" approaches that, due to the scale of unlabeled data, manually annotate a subset of data, train machine learning models to accurately predict those labels, then use those models to label the rest of the data [17, 30, 35, 47].
We extend this approach in several ways, such as by using personalized PageRank to select URLs for more efficient annotation and by setting a strict classification threshold based on "spies" to ensure high precision.

7 DISCUSSION

We have demonstrated how large-scale search logs and machine learning can be leveraged for fine-grained, real-time monitoring of vaccine intent rates and identification of individuals' concerns about vaccines. There are limitations to our approach: for example, while we can achieve finer granularity than existing data, we still miss within-ZCTA heterogeneity in vaccine intent. Furthermore, our efforts to minimize bias in our estimates are substantial but imperfect (e.g., we can only approximate TPRs and FPRs of our classifier). We also assume in this work that vaccine intent can be detected through single queries or clicks, but more sophisticated models could incorporate entire search sessions or browsing data beyond search. However, in favor of simplicity and considerations of privacy, we label vaccine intent at the query and click-level.

Despite these limitations, our resources demonstrate strong agreement with existing data and enable analyses that have not been available before. For example, our fine-grained vaccine intent estimates can help public health officials to identify under-vaccinated communities, informing where to place vaccine sites or whom to prioritize in online or real-world outreach programs. Furthermore, our novel ontology and analyses of individuals' vaccine concerns inform how to intervene, guiding messaging strategies for different holdout populations.
Lastly, our observation that holdouts resemble early adopters when they eventually seek vaccination indicates that individuals might follow similar paths towards vaccine acceptance. Future work could model these trajectories, try to identify key influences (e.g., vaccine mandates), and use these models to ideally allocate limited resources for interventions.

To facilitate policy impact and future research, we are releasing our vaccine intent estimates and our ontology of vaccine concerns. We hope that these resources will be useful for conducting detailed analyses of COVID-19 vaccine behaviors and vaccination rates. The ontology can also be employed widely in web and social media research; for example, to study how certain classes of URLs (e.g., eerie fears) are disseminated on social media or surfaced by search engines. Finally, we note that our graph ML techniques for intent detection are applicable beyond vaccines, and could be applied to precisely detect other intents of interest, such as seeking stimulus checks or COVID-19 tests. More broadly, we hope that our work can serve as a roadmap for researchers of how to derive rigorous behavioral and health insights from search logs, including how to precisely detect user intents and interests, evaluate and correct for bias, validate against external data, and release resources to promote reproducibility, transparency, and future work.
F8k2r_jshnG
Detecting vaccine intent from user search behavior
5: Top 50% of accepted papers, clear accept
The authors study the problem of detecting vaccine intent from Bing search query log data. Briefly (as I understand their method) their goal is to take a query + click graph and label it with whether it represents vaccine intent or not and then use the results of this classification to estimate the number of vaccines that will be administered in a particular zip code tabulation area. To do so, the authors use Mechanical Turk to label an initial set of query-URL click pairs and then apply semi-supervised learning techniques to grow this set of labels. Pretraining in the form of initializing the model to minimize an auxiliary loss is applied to states with less data. The resulting classifier is evaluated to be highly effective at detecting vaccine intent. Then, a bias correction is performed to go from Bing user counts to population counts, as the usage of Bing is not uniform across states. The estimates the authors develop are highly correlated with CDC-reported vaccine counts, but more granular and do not have a reporting delay. The paper is of high quality, generally clear, makes methodological innovations, and likely to be of wide interest.

Minor comments:
- Section 3 para 1: fairly important to include the precise criteria for inclusion (at least in Appendix).
- Giving some overview of the challenge of detecting intent from queries would be helpful for those who have not worked with this kind of data before. For example, in 3.1, the phrase "covid vaccine New York" is mentioned as suggestive but not unambiguous enough. But it is not clear what is missing from this. Is it that the location named is not specific enough? Or is covid vaccine + location always too ambiguous?
- How were URLs presented to the annotators? Did they see just the URL or did they see the page it led to?

Things that came to mind:
- Accuracy of intent classification across time: I believe this is not reported anywhere. This is a pretty important question given the Google Flu Trends experience.
- Connect vaccine intent queries to queries about symptoms, e.g., does experiencing symptoms motivate people to seek vaccine information?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
fhxHhXTnHc
KDD.org/2023/Workshop/epiDAMIK
2023
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs
["Serina Chang", "Adam Fourney", "Eric Horvitz"]
To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates, with correlations above 0.86, and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchical ontology of vaccine concerns, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.
["COVID-19", "vaccination", "health behaviors", "misinformation", "search logs", "graph machine learning"]
ABSTRACT

To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates across states (r = 0.86) and over time (r = 0.89), and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchical ontology of vaccine concerns, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.

KEYWORDS

COVID-19, vaccination, search logs, graph machine learning

ACM Reference Format: Serina Chang†, Adam Fourney, and Eric Horvitz. 2023.
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 19 pages.

†Research performed during an internship at Microsoft.
epiDAMIK @ KDD'23, August 7 2023, Long Beach, CA. ©2023 Copyright held by the owner/author(s).

1 INTRODUCTION

COVID-19 vaccines provide significant protection against severe cases of SARS-CoV-2 [46, 59], yet a large portion of the United States remains unvaccinated. Effective vaccine policies (for example, where to place vaccine sites [49, 74], how to communicate about the vaccine [18, 72], and how to design campaigns to reach unvaccinated populations [5, 22, 60]) rely on detailed data about who is seeking vaccination, who is holding out, and why. However, existing data are insufficient [43]. Reported vaccination rates are frequently delayed [2], missing at the county-level and below [70], and missing essential demographic data [33, 42].
Surveys provide a starting point for understanding vaccine hesitancy but are often limited by high-level questions [16], small or biased samples [13, 71], and self-reporting biases (e.g., recall or social desirability bias) [3, 66], especially in sensitive contexts such as vaccination [36].

Here, we demonstrate how large-scale search logs from Bing and machine learning (ML) can be leveraged to fill these gaps, enabling fine-grained estimation of vaccine rates and discovering the concerns of vaccine holdouts from their search interests. While search logs are powerful, with widespread coverage, real-time signals, and access to personal interests, the vast amounts of data they provide are unlabeled and unstructured, consisting of billions of natural language queries and clicks on search results. To derive meaning from these queries and clicks, we first impose structure by constructing query-click graphs, which encode aggregated query-click patterns as bipartite networks. Second, using a combination of semi-supervised graph ML techniques and manual annotation, we develop two computational resources that enable us to extract vaccine behaviors from large unlabeled search logs.

First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier achieves areas under the receiver operating characteristic curve (AUCs) above 0.90 on held-out vaccine intent labels in all states, and demonstrates strong agreement with CDC vaccination rates across states (r = 0.86) and over time (r = 0.89). Using our classifier, we can estimate vaccine intent rates to the level of ZIP code tabulation areas (ZCTAs), approximately 10x the granularity of counties and preceding lags in reporting.
We carefully correct for bias in our estimates from non-uniform Bing coverage, and demonstrate minimal additional bias from our classifier, as it achieves equivalent true and false positive rates across regions.

Second, we construct a novel ontology of COVID-19 vaccine concerns on search. Our ontology consists of 25,000 vaccine-related URLs, clicked on by Bing users, that we organize into a hierarchy of vaccine concerns from eight top categories to 36 subcategories to 156 low-level URL clusters. Unlike surveys, our ontology discovers these concerns directly from users' expressed interests and explores them at multiple scales. Furthermore, by measuring individuals' interest in each concern from their clicks, we capture revealed preferences, side-stepping potential biases in self-reporting [24, 66].

Combining our ontology with the vaccine intent classifier allows us to conduct a thorough analysis of how individuals' vaccine concerns relate to whether they decide to seek the vaccine. We use our classifier to identify two groups of users, vaccine early adopters and vaccine holdouts, and compare their search behaviors. We identify significant differences in their vaccine concerns and news consumption; for example, compared to early adopters matched on covariates, vaccine holdouts are 69% more likely to click on untrusted news sites. We find that vaccine concerns also differ significantly even within holdouts, varying across demographic groups.
Finally, we analyze the temporal dynamics of vaccine concerns and vaccine seeking, and discover that individuals exhibit telltale shifts in vaccine concerns when they eventually convert from holding out to preparing to accept the vaccine.

Our contributions can be summarized as follows:
(1) A novel vaccine intent classifier, developed with graph ML and human annotation, that achieves AUCs above 0.9 on all states and strong agreement with CDC vaccination rates;
(2) Bias-corrected estimates of vaccine intent rates from our classifier, including estimates for over 20,000 ZCTAs;
(3) A hierarchical ontology of COVID-19 vaccine concerns, including 25,000 URLs clicked on by Bing users, 156 URL clusters, 36 subcategories, and eight top categories;
(4) Analyses of vaccine holdouts' search concerns and news consumption, comparing to early adopters and studying dynamics over time.

We are publicly releasing our code, vaccine estimates, and ontology.¹ We hope that our resources, methods, and analyses can provide researchers and public health agencies with valuable insights about vaccine behaviors, helping to guide more effective, data-driven interventions.

2 DATA

Our work uses a variety of datasets, including Bing search logs, CDC vaccination rates, US Census data, and Newsguard labels (Figure 1). Bing is the second largest search engine worldwide and in the US, with a US market share of around 6% on all platforms and around 11% on desktop [65]. Despite having non-uniform coverage across the US, Bing has enough penetration in the US that we can estimate representative samples after applying inverse proportional weighting (Section 4). The Bing data we use consist of individual queries made by users, where for each query, we have information including the text of the query, an anonymized ID of the user, the timestamp, the estimated geolocation (ZIP code, county, and state), and the set of URLs clicked on, if any.
Since our work is motivated by insufficient vaccine data and vaccine concerns in the US, we limit our study to search logs in the US market. However, the methods we introduce could be extended to study vaccination rates and vaccine concerns in other languages and countries. We apply our vaccine intent classifier (Section 3) to all Bing search logs in the US from February 1 to August 31, 2021.²

¹https://github.com/microsoft/vaccine_search_study.
²February 2021 was the earliest that we could study following data protection guidelines, which allow us to store and analyze search logs up to 18 months in the past. We end in August 2021, since the FDA approved booster shots in September and our method is not designed to disambiguate between vaccine seeking for the primary series versus boosters.

Figure 1: Our work integrates a variety of datasets and methods to analyze vaccine behaviors from search logs.

To evaluate our vaccine intent classifier, we compare it to vaccination rates reported by the CDC (Section 4). The CDC provides daily vaccination rates at the levels of states [27] and counties [26]. CDC data are essential but limited, with a substantial portion of county-level data missing. These limitations serve as one of the motivations of our work, since we hope that our vaccine intent classifier can serve as a complementary resource to monitor vaccination rates, especially in smaller regions.
To characterize demographic trends in vaccine intent, we use data from the US Census' 2020 5-year American Community Survey [15]. To capture political lean, we use county-level data from the 2020 US presidential election [53]. To quantify the trustworthiness of different news sites, we use labels from Newsguard [52]. Finally, to evaluate the representativeness of Bing search trends, we compare them to Google search trends, which are publicly available online [34].

Data ethics. Our work was approved by the Microsoft IRB office and by an internal privacy review process which included officers from both Microsoft Research and the Bing product team. When we use search logs, we are mindful of the need to balance privacy and social benefits when using potentially sensitive user data. While we study individual search logs, since we need to be able to link individual vaccine outcomes (as predicted by our classifier) to search interests, those sessions are assembled using only anonymous user identifiers, which are disassociated from any specific user accounts or user profiles, and cannot be linked to any other Microsoft products. Likewise, in this anonymous view of the logs, location and demographic data were limited to ZIP code-level accuracy. Finally, we are careful to only report results aggregated over thousands of individuals. Aside from Bing search logs, all of the data sources we use are publicly available and aggregated over many individuals.

3 VACCINE INTENT CLASSIFIER

Our first goal is to develop a classifier that can accurately detect when a search user is expressing vaccine intent, i.e., trying to get the COVID-19 vaccine (e.g., book an appointment or find a location).
Detecting vaccine intent requires precision: for example, if a user issues the query [covid vaccine], they may be trying to get the vaccine, but they could also be generally curious about vaccine information or eligibility. Thus, we begin by defining a set of regular expressions that allow us to identify vaccine intent queries, i.e., queries that unambiguously express vaccine intent. To be included, the query must include both a COVID-19 term ("covid" or "coronavirus") and a vaccine term ("vaccin", "vax", "johnson", etc.). In addition, the query must satisfy at least one of the following criteria: (1) matching some variant of "find me a COVID-19 vaccine", (2) containing appointment-related words or location-seeking words, (3) containing a pharmacy name.

Figure 2: Our pipeline of methods to identify a large, high-precision set of vaccine intent URLs. (Annotation prompt shown to workers: "Given that a person clicked on this page during a search session, how sure are you that this person is seeking to get the COVID-19 vaccine?")

However, in addition to maintaining high precision, we seek to detect as many users as possible who have expressed vaccine intent, so that we have sufficient statistical power for our downstream analyses. Since our search logs contain both queries and clicks, we lose the opportunity to detect many more users if we only detect vaccine intent based on queries. For example, a user may issue the ambiguous query [covid vaccine], but then click on the URL for the CVS COVID-19 vaccine registration page, thus clarifying their intent through their clicks [61].
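A minimal sketch of these query-level rules might look as follows. The paper's exact patterns and word lists are not published in this excerpt; the regular expressions below are simplified stand-ins that follow the stated structure (COVID term AND vaccine term AND at least one of the three criteria).

```python
# Illustrative sketch of the vaccine intent query rules (simplified
# stand-in patterns, not the paper's actual regular expressions).
import re

COVID = re.compile(r"covid|coronavirus")
VACCINE = re.compile(r"vaccin|vax|johnson")
CRITERIA = [
    re.compile(r"find\s+(me\s+)?a\s+.*vaccin"),  # (1) "find me a ... vaccine"
    re.compile(r"appointment|register|sign\s*up|near\s+me|where|location"),  # (2)
    re.compile(r"cvs|walgreens|rite\s*aid|walmart"),  # (3) pharmacy names
]

def is_vaccine_intent_query(query: str) -> bool:
    q = query.lower()
    return (bool(COVID.search(q)) and bool(VACCINE.search(q))
            and any(p.search(q) for p in CRITERIA))

examples = {
    "cvs covid vaccine": True,                  # pharmacy name
    "covid vaccine appointment near me": True,  # appointment/location words
    "covid vaccine": False,                     # ambiguous: no third criterion
    "flu shot near me": False,                  # no COVID term
}
```

Note how the third criterion is what separates unambiguous intent from general curiosity: [covid vaccine] alone fails, exactly as the text describes.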
The challenge with URLs is that they are less formulaic than queries, so we cannot easily define regular expressions to identify URLs expressing vaccine intent. Our key insight is that, while we cannot use regular expressions to identify URLs, we can use them to identify vaccine intent queries and then use those queries to identify URLs, based on common query-click patterns. For example, vaccine intent queries such as [cvs covid vaccine] or [covid vaccine near me] may result in clicks on the CVS COVID-19 vaccine registration page. To capture these patterns, we construct query-click graphs [20, 45], which are bipartite networks between queries and URLs where an edge from a query to a URL indicates how often this query is followed by a click on this URL. Specifically, we construct a query-click graph per US state, aggregating over queries and clicks from two representative months in our study period (April and August 2021). Then, our pipeline proceeds in three steps (Figure 2): first, we use personalized PageRank to propagate labels from queries to URLs, so that we can generate a set of URL candidates (Section 3.1); next, we present the URL candidates to annotators on Amazon Mechanical Turk to label as vaccine intent or not (Section 3.2); finally, we use those labels to train graph neural networks (GNNs) so that we can further expand our set of vaccine intent URLs (Section 3.3).

Table 1: Top 5 URLs from personalized PageRank (S-PPR) for the four largest states in the US.

State | Top 5 URLs
CA | https://myturn.ca.gov/; https://www.cvs.com/immunizations/covid-19-vaccine; https://www.goodrx.com/covid-19/walgreens; https://www.costco.com/covid-vaccine.html; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
NY | https://covid19vaccine.health.ny.gov/; https://www.cvs.com/immunizations/covid-19-vaccine; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp; https://vaccinefinder.nyc.gov/; https://www.goodrx.com/covid-19/walgreens
TX | https://www.cvs.com/immunizations/covid-19-vaccine; https://vaccine.heb.com/; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp; https://corporate.walmart.com/covid-vaccine; https://dshs.texas.gov/covidvaccine/
FL | https://www.publix.com/covid-vaccine; https://www.cvs.com/immunizations/covid-19-vaccine; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp; https://floridahealthcovid19.gov/vaccines/; https://www.goodrx.com/covid-19/walgreens

3.1 Personalized PageRank for URL candidates

Personalized PageRank [14] is a common technique for seed expansion, where a set of seed nodes in a graph are identified as members of a community, and one wishes to expand from that set to identify more community members [40]. In our case, the vaccine intent queries act as our seed set, and our goal is to spread the influence from the seed set over the rest of the query-click graph. Given a seed set S, personalized PageRank derives a score for each node in the graph that represents the probability of landing on that node when running random walks from S.

We run personalized PageRank from the seed set of vaccine intent queries (S-PPR) to derive scores for all URLs in each query-click graph. Then, we order the URLs from each state according to their S-PPR ranking and keep the union over states of their top 100 URLs as our set of URL candidates, resulting in 2,483 candidates. The number of URLs we have in the union is much lower than the number of states multiplied by 100, since there is overlap between states. However, there is also substantial heterogeneity in top URLs across states, reflecting state-specific vaccine programs and policies (Table 1). By constructing separate graphs and running S-PPR per state, our approach is uniquely able to capture this state-specific heterogeneity.
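The S-PPR expansion above can be sketched end to end with a small power-iteration implementation: build the bipartite query-click graph, restart random walks at the seed queries, and rank URLs by their resulting scores. This is a minimal sketch, assuming `edges` maps (query, url) pairs to click counts; the paper's production setup runs this per state on much larger graphs.

```python
def personalized_pagerank(edges, seed_queries, alpha=0.85, iters=100):
    """Power-iteration personalized PageRank on a query-click graph.
    `edges` maps (query, url) -> click count; `seed_queries` form the
    restart distribution (the regex-matched vaccine intent queries)."""
    adj = {}
    for (q, u), w in edges.items():
        adj.setdefault(("q", q), {}).setdefault(("u", u), 0)
        adj[("q", q)][("u", u)] += w
        adj.setdefault(("u", u), {}).setdefault(("q", q), 0)
        adj[("u", u)][("q", q)] += w
    seeds = {("q", q) for q in seed_queries if ("q", q) in adj}
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in adj}
    scores = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - alpha) * restart[n] for n in adj}
        for n, nbrs in adj.items():
            total = sum(nbrs.values())
            for m, w in nbrs.items():
                nxt[m] += alpha * scores[n] * w / total  # spread mass along clicks
        scores = nxt
    return scores

def top_url_candidates(scores, k=100):
    """Keep only URL nodes and return the top-k by S-PPR score."""
    urls = sorted((s, n[1]) for n, s in scores.items() if n[0] == "u")
    return [u for _, u in urls[::-1][:k]]
```

Nodes disconnected from the seed queries receive zero score, which is why unrelated URLs (e.g., weather pages) never enter the candidate set.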
In supplementary experiments, we show that an alternative approach that uses a combined graph over states severely hurts performance for small states (Section A2.2).

S-PPR also provides scores for all queries in the graph, but we found that the seed set was comprehensive in identifying vaccine intent queries. The top-ranked queries that were not in the seed set tended to be location-specific, such as [covid vaccine new york], which is suggestive of vaccine intent but not unambiguous enough. Thus, in the subsequent steps of annotation and GNN expansion, we only seek to add URLs, and consider regular expressions sufficient for identifying queries. However, we also selected a sample of regular expression-detected queries to present to annotators, to validate whether they were truly vaccine intent. To capture a diverse sample, we use the union over the top 5 and bottom 5 queries per state (ranked by S-PPR), after filtering out queries that were issued by fewer than 50 users, resulting in 227 queries to label.

3.2 Annotation on Amazon Mechanical Turk

In this step, we present our URL candidates (and sampled queries) to annotators on AMT. For each URL, we first present it to three annotators. If all three give it a positive label (i.e., Highly Likely or Likely), then we label this URL as vaccine intent. If two give it a positive label and one does not, we assign it to one more annotator, and label it as vaccine intent if that annotator gives a positive label. In other words, we require vaccine intent URLs to receive three positive annotations. With this relatively strict bar, we still find that a large majority (86%) of our URL candidates are labeled as vaccine intent.
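The annotation-aggregation rule described above (three positives required, with a fourth annotator consulted only on a 2-of-3 split) can be written directly. The function name and boolean encoding are illustrative choices.

```python
def aggregate_amt_labels(annotations):
    """Apply the paper's labeling rule: a URL needs three positive
    annotations ('Highly Likely' or 'Likely') to count as vaccine intent.
    `annotations` is the ordered list of booleans (True = positive); the
    fourth entry, if present, is the tie-breaking annotator."""
    first_three = annotations[:3]
    pos = sum(first_three)
    if pos == 3:
        return True  # unanimous positive from the first three annotators
    if pos == 2 and len(annotations) >= 4:
        return bool(annotations[3])  # tie-breaker must also be positive
    return False
```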
Furthermore, we observe a clear relationship between S-PPR rank and the percentage labeled as vaccine intent: for example, around 90% of URLs from ranks 0 to 20, around 81% of URLs from ranks 40 to 60, and around 71% of URLs from ranks 80 to 100 (Figure A2). We also find a very high positive rate (96%) among the queries that we tested, thus validating our regular expressions.

3.3 Graph neural networks for expansion

Since manual annotation is expensive, we wish to augment our efforts by training ML models on the AMT labels, then use the models to expand our set of vaccine intent URLs. We formulate this problem as semi-supervised node classification on a graph, since the URLs are nodes in the query-click graph and we are trying to predict whether a URL indicates vaccine intent or not, given labels for a subset of URLs. In this section, we provide an overview of our modeling procedure, with details in Section A1.

GNN architecture and training. To solve this problem, we design a GNN [39] that consists of character-level convolutions (CNN) and graph convolutions. We use the CNNs to capture textual information in the queries and URLs, since text can be informative for this problem (e.g., the appearance of "vaccine"). The graph convolutions allow us to learn representations of URLs that draw from the representations of their neighboring queries, which draw from the representations of their neighboring URLs, and so on. In this way, we can capture "similar" URLs in embedding space (similar in terms of both text and graph structure).

To train and test our model, we randomly split the URL labels into a train set (60%), validation set (15%), and test set (25%). However, some states have much smaller graphs, and therefore, fewer positive and negative labels. For example, for Wyoming, we only have 245 positive and 276 negative URLs. We find that with so few labels, the model cannot adequately learn how to predict vaccine intent, with AUCs far below those of large states (Table A1).
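The graph-convolution component described above can be illustrated with a single message-passing step: each node's new representation aggregates its own and its neighbors' feature vectors, so a URL's embedding draws on its adjacent queries and vice versa. This is a minimal sketch; the mean aggregator and dict-based interface are assumptions, and the paper's model stacks learnable layers of this kind on top of character-level CNN text encodings.

```python
def graph_conv_layer(features, adj):
    """One mean-aggregation graph-convolution step (illustrative sketch).
    `features` maps node -> list[float]; `adj` maps node -> neighbor list.
    Each output vector is the componentwise mean of the node's own vector
    and its neighbors' vectors."""
    out = {}
    for node, vec in features.items():
        group = [vec] + [features[m] for m in adj.get(node, [])]
        out[node] = [sum(col) / len(group) for col in zip(*group)]
    return out
```

After a few such steps, a URL node's representation mixes textual signal from all queries that lead to it, which is what lets the model generalize from labeled to unlabeled URLs.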
To address this issue, we pre-train the model on S-PPR rankings, which requires no additional supervision. Our intuition is that S-PPR already performed remarkably well at predicting vaccine intent, as we discussed in the prior section. Furthermore, S-PPR rankings do not require any manual labels; we derive them entirely from our initial vaccine intent queries, which were automatically labeled using regular expressions. This pre-training encourages the model to learn URL representations that are predictive of S-PPR rankings, which we find helps substantially with predicting vaccine intent.

Evaluating GNN performance. We evaluate model performance by computing its AUC on the held-out test set. Furthermore, to account for randomness from model training and data splitting, we run 10 random trials for every model/state, where in each trial, we re-split the URL labels, retrain the model on the train set, and re-evaluate the model's performance on the test set. First, we find that pre-training significantly improves performance for the smaller states; for example, the mean AUC for Wyoming increases from 0.74 to 0.95 (Figure 3a, Table A1). We find that pre-training seems unnecessary for the larger states, such as Connecticut and Tennessee, where we are already achieving high AUCs above 0.98. After incorporating pre-training for smaller states (fewer than 5,000,000 nodes), we are able to achieve AUCs above 0.90 for all 50 states and above 0.95 for 45 states (Figure 3b).

Discovering new vaccine intent URLs. Finally, we use our trained GNNs to identify new vaccine intent URLs. In order to decide which new URLs to include, we need a score threshold. Our goal is to set the threshold such that any URL that scores above it is very likely to truly be vaccine intent (i.e., we want to maintain high precision). Borrowing the idea of "spies" from positive-unlabeled learning [8], our idea is to use the held-out positive URLs in the test set to determine where to set the threshold.
We consider two thresholds: (1) t_med, the median score of the held-out positive URLs, and (2) t_prec, the minimum threshold required to achieve precision of at least 0.9 on the held-out test set. Then, we only include URLs that pass both thresholds in at least 6 out of the 10 random trials. Even with this strict threshold, we discover around 11,400 new URLs (Table A2), increasing our number of vaccine intent URLs by 10x. In the following section, we also evaluate the impact of adding these URLs on our ability to estimate regional vaccine intent rates. We find that the new URLs not only increase our coverage of vaccine intent users by 1.5x but also further improve our agreement with reported vaccination rates from the CDC (Table 2).

4 ESTIMATING VACCINE INTENT RATES

Using our classifier, we can estimate regional rates of vaccine intent. In this section, we discuss how we correct for bias in our estimates, validate against CDC vaccination rates, and use our estimates to derive insights about fine-grained vaccination trends.

Bias evaluation. In Section A2, we decompose potential bias in our approach into two key sources: first, bias from non-uniform Bing coverage, and second, bias from non-uniform true positive rates (TPR) and false positive rates (FPR) of our classifier. We show that, if we can correct for non-uniform Bing coverage and show that our classifier's TPRs and FPRs do not significantly differ across regions, our vaccine intent estimates should, theoretically, form unbiased estimates of true vaccination rates. We evaluate our classifier's TPRs and FPRs on held-out vaccine intent labels, using the same score threshold we used for discovering new vaccine intent URLs. We find that our classifier does indeed achieve statistically equivalent TPRs and FPRs across states (Figure 3b), suggesting that our classifier contributes minimal additional bias. We discuss below how we correct for non-uniform Bing coverage.
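The spy-based thresholding above can be sketched as follows. The helper names are assumptions; `t_med` is the median score of the held-out positives and `t_prec` is the smallest candidate threshold whose held-out precision reaches 0.9, matching the two criteria in the text.

```python
def spy_thresholds(pos_scores, neg_scores):
    """Derive the two thresholds from held-out labeled URLs: the median
    positive score, and the minimum threshold reaching precision >= 0.9.
    (For even-length lists, the upper median is used, a simplification.)"""
    pos = sorted(pos_scores)
    t_med = pos[len(pos) // 2]

    def precision_at(t):
        tp = sum(s >= t for s in pos_scores)
        fp = sum(s >= t for s in neg_scores)
        return tp / (tp + fp) if tp + fp else 0.0

    cands = sorted(set(list(pos_scores) + list(neg_scores)))
    ok = [t for t in cands
          if sum(s >= t for s in pos_scores) > 0 and precision_at(t) >= 0.9]
    t_prec = min(ok) if ok else None
    return t_med, t_prec

def accept_url(scores_across_trials, thresholds_across_trials, min_trials=6):
    """A new URL is added only if it clears both thresholds in at least
    6 of the 10 random trials, per the rule in the text."""
    passing = sum(s >= tm and s >= tp
                  for s, (tm, tp) in zip(scores_across_trials,
                                         thresholds_across_trials))
    return passing >= min_trials
```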
[Figure 3: (a) GNN results with and without pre-training for Wyoming, one of the smallest states; panels plot true positive rate vs. false positive rate, with each line representing one of 10 random trials. (b) Final GNN results for all 50 states, with pre-training for smaller states; panels plot AUC, TPR, and FPR against the number of nodes in the state graph, with each dot representing a state, its y-coordinate the mean metric over 10 trials, and grey bars indicating standard deviation.]

Table 2: Each step of our classification pipeline (Section 3) improves both our correlation with CDC vaccination rates and our coverage of vaccine intent users.

Pipeline step | CDC corr. | # vaccine intent users
Only queries | 0.62 | 3.18M
+ manual URLs | 0.80 | 4.95M
+ manual and GNN URLs | 0.86 | 7.45M

Additionally, to evaluate the representativeness of Bing data, we compare search trends for vaccine intent queries between Google and Bing and find that, even before applying corrections to Bing data, the trends are highly correlated (Figure A4).

Estimating coverage-corrected rates. When we apply our classifier to Bing search logs from February 1 to August 31, 2021, we find 7.45 million "active" Bing users who expressed vaccine intent through their queries or clicks. We focus on active Bing users, i.e., those who issued at least 30 queries in a month, since we can reliably assign them to a location based on their mode ZIP code (or county or state) from those queries. Given a ZCTA z, we compute N(v̂, z), the number of active Bing users from z for whom we detect vaccine intent.
Furthermore, we estimate the ZCTA's Bing coverage as N(b, z)/N(z), where N(b, z) is its average number of active Bing users over the months in our study period and N(z) is its population size from the 2020 5-year American Community Survey [15]. Then, our coverage-corrected vaccine intent estimate p̃(v, z) for ZCTA z is

p̃(v, z) = [N(v̂, z)/N(z)] / [N(b, z)/N(z)] = N(v̂, z)/N(b, z).

To estimate the vaccine intent rate for a set Z of ZCTAs, e.g., a state or county, we simply take the population-weighted average.

Comparison to CDC vaccination data. When we compare our vaccine intent estimates to state-level vaccination rates from the CDC, we observe strong correlation (r = 0.86) on cumulative rates at the end of August 2021 (Figure 4). Notably, we find that the correlation drops to r = 0.79 if we do not correct for Bing coverage in our estimates.

[Figure 4: Comparing CDC state vaccination rates vs. estimated vaccine intent rates from Bing search logs.]

[Figure 5: Rates over time of first vaccine intent (top) vs. first dose from CDC (bottom) for the four largest states in the US.]

Furthermore, we find that each step of our classification pipeline (using only queries from regular expressions, then incorporating manually annotated URLs from personalized PageRank and AMT, then incorporating URLs found by GNNs) improves both our correlation with CDC rates and the number of users we are able
to identify (Table 2). Notably, if we only use queries, the correlation drops to r = 0.62 and we lose 57% of the users we identified with our full classifier, demonstrating the value of adding vaccine intent URLs through our graph ML framework.

[Figure 6: (a) Using our classifier, we can estimate vaccine intent rates per ZCTA, approximately 10x the granularity of counties. (b) Zooming in on New York City shows that estimated vaccine intent rates vary substantially across ZCTAs, even within the same city or county. (c) Correlations between ZCTA vaccine intent rates and demographic variables.]

Additionally, we compare our vaccine intent estimates to the CDC's vaccination rates over time. We observe strong correlations here as well, especially if we allow the CDC time series to lag behind the vaccine intent time series (Figure 5). With lags of 7-15 days (IQR), the median correlation over states reaches r = 0.89; without a lag, the median correlation drops to r = 0.78. The CDC's lag demonstrates an advantage of our classifier, as it can detect vaccine seeking in real time without delays from reporting.

Granular trends in vaccine seeking. Our vaccine intent classifier allows us to pinpoint who was seeking the COVID-19 vaccine, where, and when. We estimate cumulative vaccine intent rates up to the end of August 2021 at the level of ZCTAs (Figure 6a), approximately 10x the granularity of counties, which is the finest-grained vaccination data the CDC provides and, still, with many counties missing or having incomplete data [70]. We observe substantial heterogeneity in vaccine intent at the ZCTA level, even within the same states and counties.
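The coverage correction and the lagged comparison above can be sketched directly. The first two functions follow the formula p̃(v, z) = N(v̂, z)/N(b, z) and the population-weighted aggregation from the text; `best_lag_correlation` is an illustrative helper (not from the paper's code) that shifts the CDC series back by each candidate lag and returns the lag with the highest Pearson correlation.

```python
def corrected_vaccine_intent_rate(n_intent_users, n_bing_users):
    """Coverage-corrected ZCTA estimate: detected vaccine intent users
    divided by active Bing users (the population size N(z) cancels out)."""
    return n_intent_users / n_bing_users

def regional_rate(zcta_rates, zcta_populations):
    """Population-weighted average over a set of ZCTAs (e.g., a county)."""
    total = sum(zcta_populations.values())
    return sum(zcta_rates[z] * zcta_populations[z] for z in zcta_rates) / total

def best_lag_correlation(intent_series, cdc_series, max_lag=30):
    """Return the lag (in steps) maximizing the Pearson correlation between
    the intent series and the lag-shifted CDC series. Assumes aligned,
    equal-length series with non-constant overlapping slices."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (vx * vy)
    return max(range(max_lag + 1),
               key=lambda lag: pearson(intent_series[:len(intent_series) - lag],
                                       cdc_series[lag:]))
```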
For example, when we focus on New York City, we see that Manhattan and Queens have higher vaccine intent rates, and within Queens, ZCTAs in the northern half have higher rates (Figure 6b), aligning with reported local vaccination rates in New York City [11].

We can also use our estimates to characterize demographic trends in vaccination. When we measure correlations between ZCTA vaccine intent rate and different demographic variables, we find that overall demographic trends from our estimates align closely with prior literature [37, 41, 71, 76]. For example, we observe strong positive correlations with education, income, and population density, and a strong negative correlation with percent Republican (Figure 6c). However, we discover more nuanced trends when we look closer. Demographic trends vary significantly across states (Figure A5), especially for race and ethnicity, and trends change over time. For example, we estimate that older ZCTAs were much likelier to seek the vaccine early in 2021 but this trend fell over time (Figure A6a), reflecting how the US vaccine rollout initially prioritized seniors [38], and we see an increase in vaccine intent from more Republican ZCTAs in summer 2021 (Figure A6b). Thus, our classifier both confirms existing findings and enables new analyses with finer granularity across regions, demographics, and time.

5 SEARCH CONCERNS OF HOLDOUTS

We use our vaccine intent classifier to identify two groups: vaccine early adopters, who expressed their first vaccine intent before May 2021, and vaccine holdouts, who waited until July 2021 to show their first vaccine intent, despite becoming eligible by April.³ Comparing the search interests of these two groups allows us to discover relationships between expressed vaccine concerns, news consumption, and vaccine decision-making.
To reduce potential confounding, we match each holdout with a unique early adopter from the same county and with a similar average query count, since we know that the populations seeking vaccination changed over time and we do not want our comparisons to be overpowered by regional or demographic differences. In our following analyses, we compare the search interests of the matched sets, with over 200,000 pairs.

Vaccine holdouts are more likely to consume untrusted news. First, we analyze the trustworthiness of news sites clicked on by vaccine holdouts versus early adopters. We use ratings from Newsguard, which assigns trust scores to news sites based on criteria such as how often the site publishes false content and how it handles the difference between news and opinion [52]. We find that, in the period while vaccine holdouts were eligible but still holding out (April to June 2021), holdouts were 69% (95% CI, 67%-70%) likelier than their matched early adopters to click on untrusted news, defined by Newsguard as domains with trust scores below 60. Furthermore, we see that the lower a site's Newsguard trust score, the likelier holdouts were, relative to early adopters, to click on it (Figure 7a). For example, sites that are known for spreading COVID-19 misinformation, such as Infowars [25], RT [6], and Mercola [31], were much likelier to be clicked on by holdouts.

³We did not consider as holdouts those who never showed vaccine intent during our study period, since those users may have gotten their vaccine in ways that are not visible via search data.
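The matching step described above can be sketched as a greedy pairing: each holdout is paired with a distinct early adopter from the same county whose average query count is closest, within a cap. The dict keys, the greedy strategy, and the query-count cap are assumptions; the paper only states that matches share a county and have similar average query counts.

```python
def match_holdouts(holdouts, early_adopters, max_query_gap=10):
    """Greedily pair each holdout with an unused early adopter from the
    same county with the closest average query count (within the cap).
    Each person is a dict with 'county' and 'avg_queries' keys."""
    pool = list(early_adopters)
    used = set()
    pairs = []
    for h in holdouts:
        best, best_gap = None, max_query_gap
        for i, a in enumerate(pool):
            if i in used or a["county"] != h["county"]:
                continue
            gap = abs(a["avg_queries"] - h["avg_queries"])
            if gap <= best_gap:
                best, best_gap = i, gap
        if best is not None:
            used.add(best)  # each early adopter is matched at most once
            pairs.append((h, pool[best]))
    return pairs
```

Holdouts with no same-county adopter within the cap are simply left unmatched, which keeps the matched sets comparable at the cost of discarding some holdouts.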
³(cont.) In comparison, individuals who did not show their first vaccine intent until July 2021 likely did not receive the vaccine before.

[Figure 7: In all subfigures, news/categories are colored from yellow to dark purple to represent most holdout-leaning to most early adopter-leaning. (a) The lower the trust rating from Newsguard, the likelier it is that vaccine holdouts click on the news site, relative to early adopters. (b) Holdouts' top category concerns include Vaccine Safety, Requirements, and Information, with varying proportions over time. (c) Comparing holdouts vs. early adopters' relative probabilities of clicking on each subcategory (from April to June 2021) reveals each group's distinctive concerns. (d) Near when holdouts express vaccine intent (±3 days) in July and August 2021, their concerns become much more like the concerns of early adopters, with a few important differences.]

Ontology of vaccine concerns on search. To characterize vaccine-related search interests in far more detail, we construct a hierarchical ontology of vaccine concerns, defined in terms of 25,000 vaccine-related URLs that were clicked on by early adopters or holdouts.
We construct our ontology from the bottom up: first, we seek to automatically partition the URLs into clusters. Leveraging graph ML again, we formulate this as a community detection problem on graphs, and apply the Louvain algorithm [12] to the collapsed URL-URL graph (collapsing the bipartite query-click graph over queries). We find that this approach results in remarkably coherent clusters (Table A3), due to the strength of the signal contained in query-click graphs, and outperforms standard topic modeling approaches such as LDA [10]. Based on these clusters, we design a comprehensive set of subcategories and top categories, and sort the clusters accordingly. For example, we identify one cluster of news stories announcing vaccine passport requirements in cities, which we sort under the proof of vaccination subcategory and Vaccine Requirements top category. This bottom-up approach allows us to discover and measure vaccine concerns directly from users' search interests and analyze them at multiple scales, providing complementary insights to more traditional surveys.

In Figure A1, we summarize our resulting ontology, which consists of 8 top categories and 36 subcategories. Some top categories encompass a number of distinct subcategories: for example, under Vaccine Safety, we include normal side effects, severe side effects, concerns about reproductive health, vaccine history and development, FDA approval, fear of vaccine-caused deaths, and "eerie" fears (e.g., myths about vaccine shedding or becoming magnetic [28]). At the top category level, we find that vaccine holdouts are, by far, the most concerned about Vaccine Safety, which accounts for 23% of their vaccine-related clicks, followed by Vaccine Information (10%) and Vaccine Requirements (9%).
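The collapse step that precedes Louvain clustering can be sketched as follows: two URLs become connected with a weight reflecting the queries they share, and the resulting URL-URL graph is then handed to any Louvain implementation (e.g., networkx's `louvain_communities`). The min-count weighting scheme here is an assumption; the paper does not specify its edge weights.

```python
from itertools import combinations

def collapse_query_click_graph(edges):
    """Collapse the bipartite query-click graph over queries into a
    weighted URL-URL graph. `edges` maps (query, url) -> click count;
    two URLs clicked from the same query get an edge weighted by the
    smaller of the two click counts (an illustrative choice)."""
    by_query = {}
    for (q, u), w in edges.items():
        by_query.setdefault(q, {})[u] = w
    collapsed = {}
    for q, urls in by_query.items():
        for u1, u2 in combinations(sorted(urls), 2):
            key = (u1, u2)
            collapsed[key] = collapsed.get(key, 0) + min(urls[u1], urls[u2])
    return collapsed
```

Community detection on this collapsed graph groups URLs that attract the same queries, which is why the resulting clusters track coherent topics like "vaccine passport requirements."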
We also observe changes in interests over time (Figure 7b): for example, interest in Vaccine Incentives increased in May 2021, and interest in Vaccine Effectiveness grew in June 2021, following the spread of the Delta variant.

Distinctive concerns of holdouts vs. early adopters. Our ontology allows us to compare the vaccine concerns of holdouts and their matched early adopters. First, during the period from April to June 2021, we find that holdouts were 48% less likely than early adopters to click on any vaccine-related URL. Furthermore, their distribution of concerns within their vaccine-related clicks differed significantly (Figure 7c). Using the subcategories from our ontology, we find that holdouts were far more interested in religious concerns about the vaccine; anti-vaccine messages from experts and high-profile figures; avoiding vaccine requirements by seeking exemptions, banning mandates, or obtaining fake proof of vaccination; eerie fears and vaccine-caused deaths; and FDA approval and vaccine development. In comparison, early adopters were much more concerned about normal side effects, vaccine efficacy, comparing different types of vaccines, and information about each vaccine (Moderna, Pfizer, and Johnson & Johnson). These differences reveal the importance of a fine-grained ontology; for example, at the top category level, we would see that both groups were interested in Vaccine Safety but miss that early adopters were more concerned about normal and severe side effects, while holdouts were more concerned about eerie fears and vaccine-caused deaths. Our approach also allows us to study who is expressing these concerns in greater granularity. Even within holdouts, we observe significant variability in concerns across demographic groups (Figure A7).
For example, holdouts from more Democrat-leaning ZCTAs were particularly concerned about FDA approval and vaccine requirements, while holdouts from more Republican-leaning ZCTAs were more concerned about eerie fears and vaccine incentives.

Holdouts appear like early adopters when seeking the vaccine. In our final analysis, we exploit the fact that all of our vaccine holdouts eventually expressed vaccine intent to explore how vaccine concerns change as an individual converts from holdout to adopter. From July to August 2021, we analyze how holdouts' vaccine concerns change in the small window (±3 days) surrounding their expressed vaccine intent, compared to their typical concerns outside of that window. We find that in those windows, holdouts' vaccine concerns nearly reverse, such that they look much more like early adopters than their typical selves (Figure 7d nearly reverses 7c). During this time, holdouts become far more interested in the Johnson & Johnson vaccine, comparing different vaccines, and vaccine incentives, and less interested in anti-vaccine messages and vaccine fears. Notably, not all early adopter-leaning concerns reverse as dramatically; for example, even while expressing vaccine intent, holdouts remain less interested in the Pfizer and Moderna vaccines, which may reflect how vaccine hesitant individuals were quicker to accept the one-shot Johnson & Johnson vaccine, instead of the two-shot mRNA vaccines [21, 73]. Furthermore, there are some early adopter-leaning concerns that holdouts do not pick up on during this time, such as interest in vaccine rates.
We hypothesize that these concerns are more reflective of an early adopter "persona" rather than of concerns that would become relevant when seeking the vaccine, such as comparing different vaccines.

6 RELATED WORK

Our work centers Bing search logs, which have been used to study other health issues such as shifts in needs and disparities in information access during the pandemic [67, 68], health information needs in developing nations [1], experiences around cancer diagnoses [55, 56], concerns rising during pregnancy [29], and medical anxieties associated with online search [75]. Our efforts build on prior work that extracts insights about the COVID-19 vaccine from digital traces, such as social media [50, 57, 58] and aggregated search trends [7, 23, 48]. Our work is also related to other efforts to detect health conditions online, such as predicting depression from social media [19] and monitoring influenza from search queries [32].

Our work seeks to address the challenges of working with digital traces [24, 54] and limitations of prior work [32, 44] by developing ML and human-in-the-loop methods to precisely label search logs and evaluate bias. Furthermore, as one of the first works to use individual search logs to study the COVID-19 vaccine, we have the rare opportunity to link vaccine outcomes (predicted by our classifier) to the same individual's search interests. Our graph ML pipeline is also similar to other "big data" approaches that, due to the scale of unlabeled data, manually annotate a subset of data, train machine learning models to accurately predict those labels, then use those models to label the rest of the data [17, 30, 35, 47].
We extend this approach in several ways, such as by using personalized PageRank to select URLs for more efficient annotation and by setting a strict classification threshold based on "spies" to ensure high precision.

7 DISCUSSION

We have demonstrated how large-scale search logs and machine learning can be leveraged for fine-grained, real-time monitoring of vaccine intent rates and identification of individuals' concerns about vaccines. There are limitations to our approach: for example, while we can achieve finer granularity than existing data, we still miss within-ZCTA heterogeneity in vaccine intent. Furthermore, our efforts to minimize bias in our estimates are substantial but imperfect (e.g., we can only approximate TPRs and FPRs of our classifier). We also assume in this work that vaccine intent can be detected through single queries or clicks, but more sophisticated models could incorporate entire search sessions or browsing data beyond search. However, in favor of simplicity and considerations of privacy, we label vaccine intent at the query and click level.

Despite these limitations, our resources demonstrate strong agreement with existing data and enable analyses that have not been available before. For example, our fine-grained vaccine intent estimates can help public health officials to identify under-vaccinated communities, informing where to place vaccine sites or whom to prioritize in online or real-world outreach programs. Furthermore, our novel ontology and analyses of individuals' vaccine concerns inform how to intervene, guiding messaging strategies for different holdout populations.
Lastly, our observation that holdouts resemble early adopters when they eventually seek vaccination indicates that individuals might follow similar paths towards vaccine acceptance. Future work could model these trajectories, try to identify key influences (e.g., vaccine mandates), and use these models to ideally allocate limited resources for interventions.

To facilitate policy impact and future research, we are releasing our vaccine intent estimates and our ontology of vaccine concerns. We hope that these resources will be useful for conducting detailed analyses of COVID-19 vaccine behaviors and vaccination rates. The ontology can also be employed widely in web and social media research; for example, to study how certain classes of URLs (e.g., eerie fears) are disseminated on social media or surfaced by search engines. Finally, we note that our graph ML techniques for intent detection are applicable beyond vaccines, and could be applied to precisely detect other intents of interest, such as seeking stimulus checks or COVID-19 tests. More broadly, we hope that our work can serve as a roadmap for researchers of how to derive rigorous behavioral and health insights from search logs, including how to precisely detect user intents and interests, evaluate and correct for bias, validate against external data, and release resources to promote reproducibility, transparency, and future work.
kTKHFRaH2I
Measuring vaccine intent using web search data
4: Good paper, accept
The main contribution of this paper is a COVID-19 vaccine-intent classifier that can potentially give an accurate measure of vaccine hesitancy in an individual by analyzing their search history. The classifier is trained on search queries and website clicks from Bing search logs, annotated using Amazon Mechanical Turk. Another contribution is an ontology of website URLs, consisting of 25,000 vaccine-related URLs organized into a hierarchy from eight top categories to 36 subcategories to 156 low-level URL clusters. They combine this ontology with their vaccine-intent classifier and obtain improved performance. The classifier correlates with the CDC vaccination data, in the sense that states with high vaccination rates show low vaccine hesitancy and states with low vaccination rates show high vaccine hesitancy. One weakness is that they cap their analysis at August 2021, since the FDA approved booster shots in September and their method cannot distinguish between vaccine seeking for the primary series versus boosters. It still would have been interesting to see how the classifier performs beyond August 2021. Also, it is not clear how this method would perform with other vaccines that are not as prominent as the COVID-19 vaccine. But overall the contribution is nice and I think it should be accepted.
3: The reviewer is fairly confident that the evaluation is correct
fhxHhXTnHc
KDD.org/2023/Workshop/epiDAMIK
2023
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs
["Serina Chang", "Adam Fourney", "Eric Horvitz"]
To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates, with correlations above 0.86, and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchical ontology of vaccine concerns, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.
["COVID-19", "vaccination", "health behaviors", "misinformation", "search logs", "graph machine learning"]
ABSTRACT
To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates, with correlations above 0.86, and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchy of vaccine concerns from eight top categories to 36 subcategories to 156 low-level URL clusters, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.

KEYWORDS
COVID-19, vaccination, search logs, graph machine learning

ACM Reference Format:
Serina Chang†, Adam Fourney, and Eric Horvitz. 2023.
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 19 pages.

1 INTRODUCTION
COVID-19 vaccines provide significant protection against severe cases of SARS-CoV-2 [46, 59], yet a large portion of the United States remains unvaccinated. Effective vaccine policies—for example, where to place vaccine sites [49, 74], how to communicate about the vaccine [18, 72], and how to design campaigns to reach unvaccinated populations [5, 22, 60]—rely on detailed data about who is seeking vaccination, who is holding out, and why. However, existing data are insufficient [43]. Reported vaccination rates are frequently delayed [2], missing at the county level and below [70], and missing essential demographic data [33, 42].

†Research performed during an internship at Microsoft.
Surveys provide a starting point for understanding vaccine hesitancy but are often limited by high-level questions [16], small or biased samples [13, 71], and self-reporting biases (e.g., recall or social desirability bias) [3, 66], especially in sensitive contexts such as vaccination [36].

Here, we demonstrate how large-scale search logs from Bing and machine learning (ML) can be leveraged to fill these gaps, enabling fine-grained estimation of vaccine rates and discovering the concerns of vaccine holdouts from their search interests. While search logs are powerful, with widespread coverage, real-time signals, and access to personal interests, the vast amounts of data they provide are unlabeled and unstructured, consisting of billions of natural language queries and clicks on search results. To derive meaning from these queries and clicks, we first impose structure by constructing query-click graphs, which encode aggregated query-click patterns as bipartite networks. Second, using a combination of semi-supervised graph ML techniques and manual annotation, we develop two computational resources that enable us to extract vaccine behaviors from large unlabeled search logs.

First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier achieves areas under the receiver operating characteristic curve (AUCs) above 0.90 on held-out vaccine intent labels in all states, and demonstrates strong agreement with CDC vaccination rates across states (r = 0.86) and over time (r = 0.89). Using our classifier, we can estimate vaccine intent rates to the level of ZIP code tabulation areas (ZCTAs), approximately 10x the granularity of counties and preceding lags in reporting.
We carefully correct for bias in our estimates from non-uniform Bing coverage, and demonstrate minimal additional bias from our classifier, as it achieves equivalent true and false positive rates across regions.

Second, we construct a novel ontology of COVID-19 vaccine concerns on search. Our ontology consists of 25,000 vaccine-related URLs, clicked on by Bing users, that we organize into a hierarchy of vaccine concerns from eight top categories to 36 subcategories to 156 low-level URL clusters. Unlike surveys, our ontology discovers these concerns directly from users' expressed interests and explores them at multiple scales. Furthermore, by measuring individuals' interest in each concern from their clicks, we capture revealed preferences, side-stepping potential biases in self-reporting [24, 66].

Combining our ontology with the vaccine intent classifier allows us to conduct a thorough analysis of how individuals' vaccine concerns relate to whether they decide to seek the vaccine. We use our classifier to identify two groups of users—vaccine early adopters and vaccine holdouts—and compare their search behaviors. We identify significant differences in their vaccine concerns and news consumption; for example, compared to early adopters matched on covariates, vaccine holdouts are 69% more likely to click on untrusted news sites. We find that vaccine concerns also differ significantly even within holdouts, varying across demographic groups.
Finally, we analyze the temporal dynamics of vaccine concerns and vaccine seeking, and discover that individuals exhibit telltale shifts in vaccine concerns when they eventually convert from holding out to preparing to accept the vaccine.

Our contributions can be summarized as follows:
(1) A novel vaccine intent classifier, developed with graph ML and human annotation, that achieves AUCs above 0.9 on all states and strong agreement with CDC vaccination rates;
(2) Bias-corrected estimates of vaccine intent rates from our classifier, including estimates for over 20,000 ZCTAs;
(3) A hierarchical ontology of COVID-19 vaccine concerns, including 25,000 URLs clicked on by Bing users, 156 URL clusters, 36 subcategories, and eight top categories;
(4) Analyses of vaccine holdouts' search concerns and news consumption, comparing to early adopters and studying dynamics over time.

We are publicly releasing our code, vaccine estimates, and ontology.¹ We hope that our resources, methods, and analyses can provide researchers and public health agencies with valuable insights about vaccine behaviors, helping to guide more effective, data-driven interventions.

2 DATA
Our work uses a variety of datasets, including Bing search logs, CDC vaccination rates, US Census data, and Newsguard labels (Figure 1). Bing is the second largest search engine worldwide and in the US, with a US market share of around 6% on all platforms and around 11% on desktop [65]. Despite having non-uniform coverage across the US, Bing has enough penetration in the US that we can estimate representative samples after applying inverse proportional weighting (Section 4). The Bing data we use consist of individual queries made by users, where for each query, we have information including the text of the query, an anonymized ID of the user, the timestamp, the estimated geolocation (ZIP code, county, and state), and the set of URLs clicked on, if any.
Since our work is motivated by insufficient vaccine data and vaccine concerns in the US, we limit our study to search logs in the US market. However, the methods we introduce could be extended to study vaccination rates and vaccine concerns in other languages and countries. We apply our vaccine intent classifier (Section 3) to all Bing search logs in the US from February 1 to August 31, 2021.²

¹https://github.com/microsoft/vaccine_search_study.
²February 2021 was the earliest that we could study following data protection guidelines, which allow us to store and analyze search logs up to 18 months in the past. We end in August 2021, since the FDA approved booster shots in September and our method is not designed to disambiguate between vaccine seeking for the primary series versus boosters.

[Figure 1: Our work integrates a variety of datasets and methods to analyze vaccine behaviors from search logs.]

To evaluate our vaccine intent classifier, we compare it to vaccination rates reported by the CDC (Section 4). The CDC provides daily vaccination rates at the levels of states [27] and counties [26]. CDC data are essential but limited, with a substantial portion of county-level data missing. These limitations serve as one of the motivations of our work, since we hope that our vaccine intent classifier can serve as a complementary resource to monitor vaccination rates, especially in smaller regions.
To characterize demographic trends in vaccine intent, we use data from the US Census' 2020 5-year American Community Survey [15]. To capture political lean, we use county-level data from the 2020 US presidential election [53]. To quantify the trustworthiness of different news sites, we use labels from Newsguard [52]. Finally, to evaluate the representativeness of Bing search trends, we compare them to Google search trends, which are publicly available online [34].

Data ethics. Our work was approved by the Microsoft IRB office and by an internal privacy review process which included officers from both Microsoft Research and the Bing product team. When we use search logs, we are mindful of the need to balance privacy and social benefits when using potentially sensitive user data. While we study individual search logs, since we need to be able to link individual vaccine outcomes (as predicted by our classifier) to search interests, those sessions are assembled using only anonymous user identifiers, which are disassociated from any specific user accounts or user profiles, and cannot be linked to any other Microsoft products. Likewise, in this anonymous view of the logs, location and demographic data were limited to ZIP code-level accuracy. Finally, we are careful to only report results aggregated over thousands of individuals. Aside from Bing search logs, all of the data sources we use are publicly available and aggregated over many individuals.

3 VACCINE INTENT CLASSIFIER
Our first goal is to develop a classifier that can accurately detect when a search user is expressing vaccine intent, i.e., trying to get the COVID-19 vaccine (e.g., book an appointment or find a location).
Detecting vaccine intent requires precision: for example, if a user issues the query [covid vaccine], they may be trying to get the vaccine, but they could also be generally curious about vaccine information or eligibility. Thus, we begin by defining a set of regular expressions that allow us to identify vaccine intent queries, i.e., queries that unambiguously express vaccine intent. To be included, the query must include both a COVID-19 term ("covid" or "coronavirus") and a vaccine term ("vaccin", "vax", "johnson", etc.). In addition, the query must satisfy at least one of the following criteria: (1) matching some variant of "find me a COVID-19 vaccine", (2) containing appointment-related words or location-seeking words, (3) containing a pharmacy name.

[Figure 2: Our pipeline of methods to identify a large, high-precision set of vaccine intent URLs. Annotators are asked: "Given that a person clicked on this page during a search session, how sure are you that this person is seeking to get the COVID-19 vaccine?"]

However, in addition to maintaining high precision, we seek to detect as many users as possible who have expressed vaccine intent, so that we have sufficient statistical power for our downstream analyses. Since our search logs contain both queries and clicks, we lose the opportunity to detect many more users if we only detect vaccine intent based on queries. For example, a user may issue the ambiguous query [covid vaccine], but then click on the URL for the CVS COVID-19 vaccine registration page, thus clarifying their intent through their clicks [61].
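A minimal sketch of these query rules in Python. The paper does not publish the full word lists; only "covid", "coronavirus", "vaccin", "vax", "johnson", and the quoted criteria come from the text, and the remaining words below are illustrative assumptions.

```python
import re

# COVID-19 term AND vaccine term AND at least one intent criterion.
# Only the quoted terms come from the paper; the remaining words in each
# list are illustrative assumptions.
COVID_RE = re.compile(r"covid|coronavirus")
VACCINE_RE = re.compile(r"vaccin|vax|johnson")
CRITERIA_RES = [
    re.compile(r"\b(get|find|book)\b.{0,30}vaccin"),  # "find me a COVID-19 vaccine" variants
    re.compile(r"appointment|schedule|near me|where to|location"),  # appointment/location words
    re.compile(r"cvs|walgreens|rite aid|walmart"),  # pharmacy names (assumed list)
]

def is_vaccine_intent_query(query: str) -> bool:
    """True iff the query unambiguously expresses vaccine intent."""
    q = query.lower()
    return (bool(COVID_RE.search(q))
            and bool(VACCINE_RE.search(q))
            and any(r.search(q) for r in CRITERIA_RES))
```

Under these rules, [cvs covid vaccine] and [covid vaccine near me] match, while the ambiguous [covid vaccine] does not, mirroring the examples in the text.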
The challenge with URLs is that they are less formulaic than queries, so we cannot easily define regular expressions to identify URLs expressing vaccine intent. Our key insight is that, while we cannot use regular expressions to identify URLs, we can use them to identify vaccine intent queries and then use those queries to identify URLs, based on common query-click patterns. For example, vaccine intent queries such as [cvs covid vaccine] or [covid vaccine near me] may result in clicks on the CVS COVID-19 vaccine registration page. To capture these patterns, we construct query-click graphs [20, 45], which are bipartite networks between queries and URLs where an edge from a query to a URL indicates how often this query is followed by a click on this URL. Specifically, we construct a query-click graph per US state, aggregating over queries and clicks from two representative months in our study period (April and August 2021). Then, our pipeline proceeds in three steps (Figure 2): first, we use personalized PageRank to propagate labels from queries to URLs, so that we can generate a set of URL candidates (Section 3.1); next, we present the URL candidates to annotators on Amazon Mechanical Turk to label as vaccine intent or not (Section 3.2); finally, we use those labels to train graph neural networks (GNNs) so that we can further expand our set of vaccine intent URLs (Section 3.3).

Table 1: Top 5 URLs from Personalized PageRank (S-PPR) for the four largest states in the US.
CA: https://myturn.ca.gov/
    https://www.cvs.com/immunizations/covid-19-vaccine
    https://www.goodrx.com/covid-19/walgreens
    https://www.costco.com/covid-vaccine.html
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
NY: https://covid19vaccine.health.ny.gov/
    https://www.cvs.com/immunizations/covid-19-vaccine
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
    https://vaccinefinder.nyc.gov/
    https://www.goodrx.com/covid-19/walgreens
TX: https://www.cvs.com/immunizations/covid-19-vaccine
    https://vaccine.heb.com/
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
    https://corporate.walmart.com/covid-vaccine
    https://dshs.texas.gov/covidvaccine/
FL: https://www.publix.com/covid-vaccine
    https://www.cvs.com/immunizations/covid-19-vaccine
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
    https://floridahealthcovid19.gov/vaccines/
    https://www.goodrx.com/covid-19/walgreens

3.1 Personalized PageRank for URL candidates
Personalized PageRank [14] is a common technique for seed expansion, where a set of seed nodes in a graph are identified as members of a community, and one wishes to expand from that set to identify more community members [40]. In our case, the vaccine intent queries act as our seed set, and our goal is to spread the influence from the seed set over the rest of the query-click graph. Given a seed set S, personalized PageRank derives a score for each node in the graph that represents the probability of landing on that node when running random walks from S.

We run personalized PageRank from the seed set of vaccine intent queries (S-PPR) to derive scores for all URLs in each query-click graph. Then, we order the URLs from each state according to their S-PPR ranking and keep the union over states of their top 100 URLs as our set of URL candidates, resulting in 2,483 candidates. The number of URLs we have in the union is much lower than the number of states multiplied by 100, since there is overlap between states. However, there is also substantial heterogeneity in top URLs across states, reflecting state-specific vaccine programs and policies (Table 1). By constructing separate graphs and running S-PPR per state, our approach is uniquely able to capture this state-specific heterogeneity.
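A toy sketch of S-PPR on a small query-click graph, using a pure-Python power iteration; the queries, URLs, and click counts below are invented for illustration, and the paper's graphs are of course vastly larger.

```python
from collections import defaultdict

# Toy query-click graph: (query, URL, click count). All names invented.
clicks = [
    ("q:cvs covid vaccine", "u:cvs.com/covid-vaccine", 90),
    ("q:covid vaccine near me", "u:cvs.com/covid-vaccine", 40),
    ("q:covid vaccine near me", "u:vaccines.gov", 60),
    ("q:covid symptoms", "u:cdc.gov/symptoms", 80),  # no vaccine intent
]
adj = defaultdict(dict)  # undirected, click-weighted bipartite graph
for q, u, w in clicks:
    adj[q][u] = w
    adj[u][q] = w

def personalized_pagerank(adj, seeds, alpha=0.85, iters=100):
    """Power iteration for PageRank with restarts to the seed queries."""
    total = sum(seeds.values())
    restart = {n: seeds.get(n, 0.0) / total for n in adj}
    score = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - alpha) * restart[n] for n in adj}
        for n, nbrs in adj.items():
            out = sum(nbrs.values())
            for m, w in nbrs.items():
                nxt[m] += alpha * score[n] * w / out
        score = nxt
    return score

# Seed set: queries already matched by the vaccine intent regexes.
seeds = {"q:cvs covid vaccine": 1.0, "q:covid vaccine near me": 1.0}
scores = personalized_pagerank(adj, seeds)
urls = sorted((n for n in adj if n.startswith("u:")), key=scores.get, reverse=True)
```

URLs reachable from the seed queries (the CVS and vaccines.gov pages) score high and become annotation candidates, while the unrelated symptoms page receives no mass at all.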
In supplementary experiments, we show that an alternative approach that uses a combined graph over states severely hurts performance for small states (Section A2.2).

S-PPR also provides scores for all queries in the graph, but we found that the seed set was comprehensive in identifying vaccine intent queries. The top-ranked queries that were not in the seed set tended to be location-specific, such as [covid vaccine new york], which is suggestive of vaccine intent but not unambiguous enough. Thus, in the subsequent steps of annotation and GNN expansion, we only seek to add URLs, and consider regular expressions sufficient for identifying queries. However, we also selected a sample of regular expression-detected queries to present to annotators, to validate whether they were truly vaccine intent. To capture a diverse sample, we use the union over the top 5 and bottom 5 queries per state (ranked by S-PPR), after filtering out queries that were issued by fewer than 50 users, resulting in 227 queries to label.

3.2 Annotation on Amazon Mechanical Turk
In this step, we present our URL candidates (and sampled queries) to annotators on AMT. For each URL, we first present it to three annotators. If all three give it a positive label (i.e., Highly Likely or Likely), then we label this URL as vaccine intent. If two give it a positive label and one does not, we assign it to one more annotator, and label it as vaccine intent if that annotator gives a positive label. In other words, we require vaccine intent URLs to receive three positive annotations. With this relatively strict bar, we still find that a large majority (86%) of our URL candidates are labeled as vaccine intent.
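The aggregation rule just described (three positive annotations, with a fourth annotator breaking 2-of-3 ties) can be sketched as:

```python
def aggregate_amt_labels(first_three, extra_annotator=None):
    """Label-aggregation rule from Section 3.2: a URL is labeled vaccine
    intent iff it receives three positive annotations. `first_three` holds
    the three initial judgments (True = Highly Likely or Likely); when
    exactly two are positive, a fourth annotation breaks the tie."""
    positives = sum(first_three)
    if positives == 3:
        return True
    if positives == 2:
        return bool(extra_annotator)
    return False
```
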
Furthermore, we observe a clear relationship between S-PPR rank and the percentage labeled as vaccine intent: for example, around 90% of URLs from ranks 0 to 20, around 81% of URLs from ranks 40 to 60, and around 71% of URLs from ranks 80 to 100 (Figure A2). We also find a very high positive rate (96%) among the queries that we tested, thus validating our regular expressions.

3.3 Graph neural networks for expansion
Since manual annotation is expensive, we wish to augment our efforts by training ML models on the AMT labels, then use the models to expand our set of vaccine intent URLs. We formulate this problem as semi-supervised node classification on a graph, since the URLs are nodes in the query-click graph and we are trying to predict whether a URL indicates vaccine intent or not, given labels for a subset of URLs. In this section, we provide an overview of our modeling procedure, with details in Section A1.

GNN architecture and training. To solve this problem, we design a GNN [39] that consists of character-level convolutions (CNN) and graph convolutions. We use the CNNs to capture textual information in the queries and URLs, since text can be informative for this problem (e.g., the appearance of "vaccine"). The graph convolutions allow us to learn representations of URLs that draw from the representations of their neighboring queries, which draw from the representations of their neighboring URLs, and so on. In this way, we can capture "similar" URLs in embedding space (similar in terms of both text and graph structure).

To train and test our model, we randomly split the URL labels into a train set (60%), validation set (15%), and test set (25%). However, some states have much smaller graphs, and therefore, fewer positive and negative labels. For example, for Wyoming, we only have 245 positive and 276 negative URLs. We find that with such few labels, the model cannot adequately learn how to predict vaccine intent, with AUCs far below those of large states (Table A1).
To address this issue, we pre-train the model on S-PPR rankings, which requires no additional supervision. Our intuition is that S-PPR already performed remarkably well at predicting vaccine intent, as we discussed in the prior section. Furthermore, S-PPR rankings do not require any manual labels; we derive them entirely from our initial vaccine intent queries, which were automatically labeled using regular expressions. This pre-training encourages the model to learn URL representations that are predictive of S-PPR rankings, which we find help substantially with predicting vaccine intent.

Evaluating GNN performance. We evaluate model performance by computing its AUC on the held-out test set. Furthermore, to account for randomness from model training and data splitting, we run 10 random trials for every model/state, where in each trial, we re-split the URL labels, retrain the model on the train set, and re-evaluate the model's performance on the test set. First, we find that pre-training significantly improves performance for the smaller states; for example, the mean AUC for Wyoming increases from 0.74 to 0.95 (Figure 3a, Table A1). We find that pre-training seems unnecessary for the larger states, such as Connecticut and Tennessee, where we are already achieving high AUCs above 0.98. After incorporating pre-training for smaller states (fewer than 5,000,000 nodes), we are able to achieve AUCs above 0.90 for all 50 states and above 0.95 for 45 states (Figure 3b).

Discovering new vaccine intent URLs. Finally, we use our trained GNNs to identify new vaccine intent URLs. In order to decide which new URLs to include, we need a score threshold. Our goal is to set the threshold such that any URL that scores above it is very likely to truly be vaccine intent (i.e., we want to maintain high precision). Borrowing the idea of "spies" from positive-unlabeled learning [8], our idea is to use the held-out positive URLs in the test set to determine where to set the threshold.
We consider two thresholds: (1) t_med, the median score of the held-out positive URLs, and (2) t_prec, the minimum threshold required to achieve precision of at least 0.9 on the held-out test set. Then, we only include URLs that pass both thresholds in at least 6 out of the 10 random trials. Even with this strict threshold, we discover around 11,400 new URLs (Table A2), increasing our number of vaccine intent URLs by 10x. In the following section, we also evaluate the impact of adding these URLs on our ability to estimate regional vaccine intent rates. We find that the new URLs not only increase our coverage of vaccine intent users by 1.5x but also further improve our agreement with reported vaccination rates from the CDC (Table 2).

4 ESTIMATING VACCINE INTENT RATES
Using our classifier, we can estimate regional rates of vaccine intent. In this section, we discuss how we correct for bias in our estimates, validate against CDC vaccination rates, and use our estimates to derive insights about fine-grained vaccination trends.

Bias evaluation. In Section A2, we decompose potential bias in our approach into two key sources: first, bias from non-uniform Bing coverage, and second, bias from non-uniform true positive rates (TPR) and false positive rates (FPR) of our classifier. We show that, if we can correct for non-uniform Bing coverage and show that our classifier's TPRs and FPRs do not significantly differ across regions, our vaccine intent estimates should, theoretically, form unbiased estimates of true vaccination rates. We evaluate our classifier's TPRs and FPRs on held-out vaccine intent labels, using the same score threshold we used for discovering new vaccine intent URLs. We find that our classifier does indeed achieve statistically equivalent TPRs and FPRs across states (Figure 3b), suggesting that our classifier contributes minimal additional bias. We discuss below how we correct for non-uniform Bing coverage.
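The "spies"-style thresholding from the end of Section 3.3 (t_med, t_prec, and the 6-of-10-trials rule) could be sketched as follows; the helper names are ours, not the paper's.

```python
import statistics

def spy_thresholds(scores, labels, target_precision=0.9):
    """Compute the two score thresholds from held-out test URLs:
    t_med  - median score of the held-out positive ("spy") URLs;
    t_prec - smallest threshold whose precision on the held-out set
             reaches target_precision (0.9 in the paper).
    `scores` and `labels` are parallel lists; label 1 = vaccine intent."""
    pos_scores = [s for s, y in zip(scores, labels) if y == 1]
    t_med = statistics.median(pos_scores)
    t_prec = None
    for t in sorted(set(scores)):  # candidate thresholds, low to high
        kept = [y for s, y in zip(scores, labels) if s >= t]
        if kept and sum(kept) / len(kept) >= target_precision:
            t_prec = t
            break
    return t_med, t_prec

def include_url(trial_scores, trial_thresholds, min_trials=6):
    """A new URL is added only if its score passes BOTH thresholds in at
    least `min_trials` of the random trials (6 of 10 in the paper)."""
    passes = sum(1 for s, (t_med, t_prec) in zip(trial_scores, trial_thresholds)
                 if t_prec is not None and s >= t_med and s >= t_prec)
    return passes >= min_trials
```
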
Additionally, to evaluate the representativeness of Bing data, we compare search trends for vaccine intent queries between Google and Bing and find that, even before applying corrections to Bing data, the trends are highly correlated (Figure A4).

[Figure 3: (a) GNN results with and without pre-training for Wyoming, one of the smallest states. Each line represents one of 10 random trials. (b) Final GNN results for all 50 states, with pre-training for smaller states. Each dot represents a state, with its y-coordinate representing the mean metric over 10 trials and grey bars indicating standard deviation.]

Table 2: Each step of our classification pipeline (Section 3) improves both our correlation with CDC vaccination rates and our coverage of vaccine intent users.
Pipeline step            CDC corr.   # vaccine intent users
Only queries             0.62        3.18M
+ manual URLs            0.80        4.95M
+ manual and GNN URLs    0.86        7.45M

Estimating coverage-corrected rates. When we apply our classifier to Bing search logs from February 1 to August 31, 2021, we find 7.45 million "active" Bing users who expressed vaccine intent through their queries or clicks. We focus on active Bing users, i.e., those who issued at least 30 queries in a month, since we can reliably assign them to a location based on their mode ZIP code (or county or state) from those queries. Given a ZCTA z, we compute N(v̂, z), the number of active Bing users from z for whom we detect vaccine intent.
Furthermore, we estimate the ZCTA's Bing coverage as N(b, z)/N(z), where N(b, z) is its average number of active Bing users over the months in our study period and N(z) is its population size from the 2020 5-year American Community Survey [15]. Then, our coverage-corrected vaccine intent estimate p̃(v, z) for ZCTA z is

p̃(v, z) = [N(v̂, z)/N(z)] / [N(b, z)/N(z)] = N(v̂, z)/N(b, z).

To estimate the vaccine intent rate for a set Z of ZCTAs, e.g., a state or county, we simply take the population-weighted average.

Comparison to CDC vaccination data. When we compare our vaccine intent estimates to state-level vaccination rates from the CDC, we observe strong correlation (r = 0.86) on cumulative rates at the end of August 2021 (Figure 4). Notably, we find that the correlation drops to r = 0.79 if we do not correct for Bing coverage in our estimates.

[Figure 4: Comparing CDC state vaccination rates vs. estimated vaccine intent rates from Bing search logs.]

[Figure 5: Rates over time of first vaccine intent (top) vs. first dose from CDC (bottom) for the four largest states in the US.]

Furthermore, we find that each step of our classification pipeline—only using queries from regular expressions, incorporating manually annotated URLs from personalized PageRank and AMT, incorporating URLs found by GNNs—improves both our correlation with CDC rates and the number of users we are able to identify (Table 2).
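The coverage correction reduces to a ratio of counts per ZCTA, plus a population-weighted average for aggregation; a minimal sketch with invented counts:

```python
def vaccine_intent_rate(n_intent, n_bing):
    """Coverage-corrected estimate for one ZCTA z:
    p~(v,z) = [N(v^,z)/N(z)] / [N(b,z)/N(z)] = N(v^,z)/N(b,z);
    the population N(z) cancels within a single ZCTA."""
    return n_intent / n_bing

def aggregate_rate(zctas):
    """Population-weighted average over (n_intent, n_bing, population) rows,
    e.g., to get a county or state rate from its ZCTAs."""
    total_pop = sum(pop for _, _, pop in zctas)
    return sum(pop * (v / b) for v, b, pop in zctas) / total_pop
```

For example, a ZCTA with 50 detected vaccine intent users out of 500 active Bing users gets an estimated rate of 0.10 regardless of its total population, while the population only enters when averaging ZCTAs together.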
to identify (Table 2). Notably, if we only use queries, the correlation drops to r = 0.62 and we lose 57% of the users we identified with our full classifier, demonstrating the value of adding vaccine intent URLs through our graph ML framework.

Figure 6: (a) Using our classifier, we can estimate vaccine intent rates per ZCTA, approximately 10x the granularity of counties. (b) Zooming in on New York City shows that estimated vaccine intent rates vary substantially across ZCTAs, even within the same city or county. (c) Correlations between ZCTA vaccine intent rates and demographic variables.

Additionally, we compare our vaccine intent estimates to the CDC's vaccination rates over time. We observe strong correlations here as well, especially if we allow the CDC time series to lag behind the vaccine intent time series (Figure 5). With lags of 7-15 days (IQR), the median correlation over states reaches r = 0.89; without a lag, the median correlation drops to r = 0.78. The CDC's lag demonstrates an advantage of our classifier, as it can detect vaccine seeking in real time without delays from reporting.

Granular trends in vaccine seeking. Our vaccine intent classifier allows us to pinpoint who was seeking the COVID-19 vaccine, where, and when. We estimate cumulative vaccine intent rates up to the end of August 2021 at the level of ZCTAs (Figure 6a), approximately 10x the granularity of counties, which is the finest-grained vaccination data the CDC provides and, still, with many counties missing or having incomplete data [70]. We observe substantial heterogeneity in vaccine intent at the ZCTA-level, even within the same states and counties.
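The lagged comparison against CDC reporting described above can be sketched as a search over forward shifts of the CDC series; this is a minimal illustration, not the paper's implementation, and the function name and inputs are assumptions:

```python
import numpy as np


def best_lag_corr(intent_series, cdc_series, max_lag=30):
    """Pearson correlation between a daily vaccine-intent series and the CDC
    series shifted by 0..max_lag days; returns (best_lag, best_corr).

    A high correlation at a positive lag suggests CDC reporting trails
    search-detected vaccine seeking by roughly that many days."""
    xs = np.asarray(intent_series, dtype=float)
    ys = np.asarray(cdc_series, dtype=float)
    best_lag, best_r = 0, -2.0
    for lag in range(max_lag + 1):
        y = ys[lag:]                      # CDC series shifted back by `lag` days
        n = min(len(xs), len(y))
        if n < 2:
            break
        r = np.corrcoef(xs[:n], y[:n])[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r
```

A state-level analysis would run this per state and report the median best lag and correlation across states.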
For example, when we focus on New York City, we see that Manhattan and Queens have higher vaccine intent rates, and within Queens, ZCTAs in the northern half have higher rates (Figure 6b), aligning with reported local vaccination rates in New York City [11].

We can also use our estimates to characterize demographic trends in vaccination. When we measure correlations between ZCTA vaccine intent rate and different demographic variables, we find that overall demographic trends from our estimates align closely with prior literature [37,41,71,76]. For example, we observe strong positive correlations with education, income, and population density, and a strong negative correlation with percent Republican (Figure 6c). However, we discover more nuanced trends when we look closer. Demographic trends vary significantly across states (Figure A5), especially for race and ethnicity, and trends change over time. For example, we estimate that older ZCTAs were much likelier to seek the vaccine early in 2021 but this trend fell over time (Figure A6a), reflecting how the US vaccine rollout initially prioritized seniors [38], and we see an increase in vaccine intent from more Republican ZCTAs in summer 2021 (Figure A6b). Thus, our classifier both confirms existing findings and enables new analyses with finer granularity across regions, demographics, and time.

5 SEARCH CONCERNS OF HOLDOUTS

We use our vaccine intent classifier to identify two groups: vaccine early adopters, who expressed their first vaccine intent before May 2021, and vaccine holdouts, who waited until July 2021 to show their first vaccine intent, despite becoming eligible by April.³ Comparing the search interests of these two groups allows us to discover relationships between expressed vaccine concerns, news consumption, and vaccine decision-making.
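The two cohorts defined above are determined by each user's first detected vaccine-intent date. A minimal sketch of that assignment, with the cutoff dates taken from the text and everything else (names, the None convention) assumed:

```python
from datetime import date

# Cutoffs from the text: early adopters expressed first vaccine intent
# before May 2021; holdouts waited until July 2021 or later.
EARLY_CUTOFF = date(2021, 5, 1)
HOLDOUT_START = date(2021, 7, 1)


def assign_cohort(first_intent):
    """Map a user's first vaccine-intent date to a cohort label, or None for
    users who fall outside both definitions (including users with no detected
    intent, who are excluded per the paper's footnote)."""
    if first_intent is None:
        return None  # never showed vaccine intent on search
    if first_intent < EARLY_CUTOFF:
        return "early_adopter"
    if first_intent >= HOLDOUT_START:
        return "holdout"
    return None  # first intent in May-June 2021: neither cohort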
To reduce potential confounding, we match each holdout with a unique early adopter from the same county and with a similar average query count, since we know that the populations seeking vaccination changed over time and we do not want our comparisons to be overpowered by regional or demographic differences. In our following analyses, we compare the search interests of the matched sets, with over 200,000 pairs.

Vaccine holdouts are more likely to consume untrusted news. First, we analyze the trustworthiness of news sites clicked on by vaccine holdouts versus early adopters. We use ratings from Newsguard, which assigns trust scores to news sites based on criteria such as how often the site publishes false content and how it handles the difference between news and opinion [52]. We find that, in the period while vaccine holdouts were eligible but still holding out (April to June 2021), holdouts were 69% (95% CI, 67%-70%) likelier than their matched early adopters to click on untrusted news, defined by Newsguard as domains with trust scores below 60. Furthermore, we see that the more the Newsguard trust score degrades, the likelier it was that holdouts clicked on the site, relative to early adopters (Figure 7a). For example, sites that are known for spreading COVID-19 misinformation, such as Infowars [25], RT [6], and Mercola [31], were much likelier to be clicked on by holdouts.

³ We did not consider as holdouts those who never showed vaccine intent during our study period, since those users may have gotten their vaccine in ways that are not visible via search data.
In comparison, individuals who did not show their first vaccine intent until July 2021 likely did not receive the vaccine before.

Figure 7: In all subfigures, news/categories are colored from yellow to dark purple to represent most holdout-leaning to most early adopter-leaning. (a) The lower the trust rating from Newsguard, the likelier it is that vaccine holdouts click on the news site, relative to early adopters. (b) Holdouts' top category concerns include Vaccine Safety, Requirements, and Information, with varying proportions over time. (c) Comparing holdouts vs. early adopters' relative probabilities of clicking on each subcategory (from April to June 2021) reveals each group's distinctive concerns. (d) Near when holdouts express vaccine intent (±3 days) in July and August 2021, their concerns become much more like the concerns of early adopters, with a few important differences.

Ontology of vaccine concerns on search. To characterize vaccine-related search interests in far more detail, we construct a hierarchical ontology of vaccine concerns, defined in terms of 25,000 vaccine-related URLs that were clicked on by early adopters or holdouts.
We construct our ontology from the bottom-up: first, we seek to automatically partition the URLs into clusters. Leveraging graph ML again, we formulate this as a community detection problem on graphs, and apply the Louvain algorithm [12] to the collapsed URL-URL graph (collapsing the bipartite query-click graph over queries). We find that this approach results in remarkably coherent clusters (Table A3), due to the strength of the signal contained in query-click graphs, and outperforms standard topic modeling approaches such as LDA [10]. Based on these clusters, we design a comprehensive set of subcategories and top categories, and sort the clusters accordingly. For example, we identify one cluster of news stories announcing vaccine passport requirements in cities, which we sort under the proof of vaccination subcategory and Vaccine Requirements top category. This bottom-up approach allows us to discover and measure vaccine concerns directly from users' search interests and analyze them at multiple scales, providing complementary insights to more traditional surveys.

In Figure A1, we summarize our resulting ontology, which consists of 8 top categories and 36 subcategories. Some top categories encompass a number of distinct subcategories: for example, under Vaccine Safety, we include normal side effects, severe side effects, concerns about reproductive health, vaccine history and development, FDA approval, fear of vaccine-caused deaths, and "eerie" fears (e.g., myths about vaccine shedding or becoming magnetic [28]). At the top category-level, we find that vaccine holdouts are, by far, the most concerned about Vaccine Safety, which accounts for 23% of their vaccine-related clicks, followed by Vaccine Information (10%) and Vaccine Requirements (9%).
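The clustering step described above (collapse the bipartite query-click graph onto URLs, then run Louvain on the resulting URL-URL graph) can be sketched with networkx; the toy (query, url) input and the use of shared-query counts as edge weights are assumptions, since the paper does not spell out its weighting:

```python
import networkx as nx


def url_clusters(query_clicks, seed=0):
    """Cluster URLs by projecting the bipartite query-click graph onto its
    URL side (two URLs are linked, weighted by the number of queries they
    share) and running Louvain community detection.

    query_clicks: iterable of (query, url) pairs. A simplified sketch of
    the paper's pipeline, not its exact implementation."""
    B = nx.Graph()
    B.add_edges_from(query_clicks)
    urls = {u for _, u in query_clicks}
    # Collapse over queries: edge weight = number of shared queries.
    G = nx.bipartite.weighted_projected_graph(B, urls)
    communities = nx.community.louvain_communities(G, weight="weight", seed=seed)
    return [sorted(c) for c in communities]
```

On a toy graph where u1/u2 share two queries and u3/u4 share one, this yields the two expected URL clusters; at the paper's scale, each cluster would then be manually sorted into subcategories and top categories.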
We also observe changes in interests over time (Figure 7b): for example, interest in Vaccine Incentives increased in May 2021, and interest in Vaccine Effectiveness grew in June 2021, following the spread of the Delta variant.

Distinctive concerns of holdouts vs. early adopters. Our ontology allows us to compare the vaccine concerns of holdouts and their matched early adopters. First, during the period from April to June 2021, we find that holdouts were 48% less likely than early adopters to click on any vaccine-related URL. Furthermore, their distribution of concerns within their vaccine-related clicks differed significantly (Figure 7c). Using the subcategories from our ontology, we find that holdouts were far more interested in religious concerns about the vaccine; anti-vaccine messages from experts and high-profile figures; avoiding vaccine requirements by seeking exemptions, banning mandates, or obtaining fake proof of vaccination; eerie fears and vaccine-caused deaths; and FDA approval and vaccine development. In comparison, early adopters were much more concerned about normal side effects, vaccine efficacy, comparing different types of vaccines, and information about each vaccine (Moderna, Pfizer, and Johnson & Johnson). These differences reveal the importance of a fine-grained ontology; for example, at the top category level, we would see that both groups were interested in Vaccine Safety but miss that early adopters were more concerned about normal and severe side effects, while holdouts were more concerned about eerie fears and vaccine-caused deaths. Our approach also allows us to study who is expressing these concerns in greater granularity. Even within holdouts, we observe significant variability in concerns across demographic groups (Figure A7).
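A relative-probability comparison like the one behind Figure 7c can be sketched as the ratio of each subcategory's click share between the two matched groups; the add-one smoothing and input format here are assumptions for illustration:

```python
from collections import Counter


def subcategory_lean(holdout_clicks, adopter_clicks):
    """Relative probability that holdouts vs. matched early adopters click
    each subcategory: the holdout share divided by the adopter share
    (>1 leans holdout, <1 leans early adopter).

    Inputs are lists of subcategory labels, one per vaccine-related click.
    A simplified sketch with add-one smoothing for rare categories."""
    h, a = Counter(holdout_clicks), Counter(adopter_clicks)
    cats = set(h) | set(a)
    n_h = sum(h.values()) + len(cats)
    n_a = sum(a.values()) + len(cats)
    return {c: ((h[c] + 1) / n_h) / ((a[c] + 1) / n_a) for c in cats}
```

A subcategory like "eerie fears" would show a ratio well above 1 if holdouts devote a larger fraction of their vaccine-related clicks to it than their matched early adopters do.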
For example, holdouts from more Democrat-leaning ZCTAs were particularly concerned about FDA approval and vaccine requirements, while holdouts from more Republican-leaning ZCTAs were more concerned about eerie fears and vaccine incentives.

Holdouts appear like early adopters when seeking the vaccine. In our final analysis, we exploit the fact that all of our vaccine holdouts eventually expressed vaccine intent to explore how vaccine concerns change as an individual converts from holdout to adopter. From July to August 2021, we analyze how holdouts' vaccine concerns change in the small window (±3 days) surrounding their expressed vaccine intent, compared to their typical concerns outside of that window. We find that in those windows, holdouts' vaccine concerns nearly reverse, such that they look much more like early adopters than their typical selves (Figure 7d nearly reverses 7c). During this time, holdouts become far more interested in the Johnson & Johnson vaccine, comparing different vaccines, and vaccine incentives, and less interested in anti-vaccine messages and vaccine fears. Notably, not all early adopter-leaning concerns reverse as dramatically; for example, even while expressing vaccine intent, holdouts remain less interested in the Pfizer and Moderna vaccines, which may reflect how vaccine hesitant individuals were quicker to accept the one-shot Johnson & Johnson vaccine, instead of the two-shot mRNA vaccines [21,73]. Furthermore, there are some early adopter-leaning concerns that holdouts do not pick up on during this time, such as interest in vaccine rates.
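The ±3-day window analysis above starts by splitting each holdout's clicks into near-intent and typical sets; a minimal sketch, with the (day, url) input format assumed:

```python
from datetime import date, timedelta


def split_near_intent(clicks, intent_day, window_days=3):
    """Split a holdout's clicks into those within +/-window_days of their
    expressed vaccine intent vs. the rest, mirroring the paper's window
    comparison. clicks: list of (day, url) pairs; a hypothetical sketch."""
    delta = timedelta(days=window_days)
    near = [(d, u) for d, u in clicks if abs(d - intent_day) <= delta]
    typical = [(d, u) for d, u in clicks if abs(d - intent_day) > delta]
    return near, typical
```

The concern distributions of the two sets can then be compared in the same way the two cohorts are compared, to measure how much a holdout's near-intent interests resemble an early adopter's.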
We hypothesize that these concerns are more reflective of an early adopter "persona" rather than of concerns that would become relevant when seeking the vaccine, such as comparing different vaccines.

6 RELATED WORK

Our work centers Bing search logs, which have been used to study other health issues such as shifts in needs and disparities in information access during the pandemic [67,68], health information needs in developing nations [1], experiences around cancer diagnoses [55,56], concerns rising during pregnancy [29], and medical anxieties associated with online search [75]. Our efforts build on prior work that extracts insights about the COVID-19 vaccine from digital traces, such as social media [50,57,58] and aggregated search trends [7,23,48]. Our work is also related to other efforts to detect health conditions online, such as predicting depression from social media [19] and monitoring influenza from search queries [32].

Our work seeks to address the challenges of working with digital traces [24,54] and limitations of prior work [32,44] by developing ML and human-in-the-loop methods to precisely label search logs and evaluate bias. Furthermore, as one of the first works to use individual search logs to study the COVID-19 vaccine, we have the rare opportunity to link vaccine outcomes (predicted by our classifier) to the same individual's search interests. Our graph ML pipeline is also similar to other "big data" approaches that, due to the scale of unlabeled data, manually annotate a subset of data, train machine learning models to accurately predict those labels, then use those models to label the rest of the data [17,30,35,47].
We extend this approach in several ways, such as by using personalized PageRank to select URLs for more efficient annotation and by setting a strict classification threshold based on "spies" to ensure high precision.

7 DISCUSSION

We have demonstrated how large-scale search logs and machine learning can be leveraged for fine-grained, real-time monitoring of vaccine intent rates and identification of individuals' concerns about vaccines. There are limitations to our approach: for example, while we can achieve finer granularity than existing data, we still miss within-ZCTA heterogeneity in vaccine intent. Furthermore, our efforts to minimize bias in our estimates are substantial but imperfect (e.g., we can only approximate TPRs and FPRs of our classifier). We also assume in this work that vaccine intent can be detected through single queries or clicks, but more sophisticated models could incorporate entire search sessions or browsing data beyond search. However, in favor of simplicity and considerations of privacy, we label vaccine intent at the query and click-level.

Despite these limitations, our resources demonstrate strong agreement with existing data and enable analyses that have not been available before. For example, our fine-grained vaccine intent estimates can help public health officials to identify under-vaccinated communities, informing where to place vaccine sites or whom to prioritize in online or real-world outreach programs. Furthermore, our novel ontology and analyses of individuals' vaccine concerns inform how to intervene, guiding messaging strategies for different holdout populations.
Lastly, our observation that holdouts resemble early adopters when they eventually seek vaccination indicates that individuals might follow similar paths towards vaccine acceptance. Future work could model these trajectories, try to identify key influences (e.g., vaccine mandates), and use these models to ideally allocate limited resources for interventions.

To facilitate policy impact and future research, we are releasing our vaccine intent estimates and our ontology of vaccine concerns. We hope that these resources will be useful for conducting detailed analyses of COVID-19 vaccine behaviors and vaccination rates. The ontology can also be employed widely in web and social media research; for example, to study how certain classes of URLs (e.g., eerie fears) are disseminated on social media or surfaced by search engines. Finally, we note that our graph ML techniques for intent detection are applicable beyond vaccines, and could be applied to precisely detect other intents of interest, such as seeking stimulus checks or COVID-19 tests. More broadly, we hope that our work can serve as a roadmap for researchers of how to derive rigorous behavioral and health insights from search logs, including how to precisely detect user intents and interests, evaluate and correct for bias, validate against external data, and release resources to promote reproducibility, transparency, and future work.
ZvkFh9VHIQ
The authors did an in-depth analysis of the search logs related to the vaccines to detect an individual's vaccine intent and further discovered insights on the behavioral differences between (i) early vaccine adopters and (ii) vaccine-resistant groups. Overall, the paper is well written, is original, and would help the community to understand behavioral patterns from web search logs.
5: Top 50% of accepted papers, clear accept
Summary (Long)
- The authors did an in-depth analysis of search logs related to vaccines to detect an individual's vaccine intent and further discovered insights on the behavioral differences between (i) early vaccine adopters and (ii) vaccine-resistant groups. Their vaccine intent classifier pipeline includes finding top candidate user URLs using personalized PageRank, followed by annotation via crowdsourcing, and expanding URLs via GNNs. They also prepared an ontology of vaccine concerns by applying a community detection algorithm. Though some of the model choices are not well justified, overall, the paper is well written, is original, and would help the community to understand behavioral patterns from web search logs.

Strong points (Pros)
- Overall, their method could fill the gaps in understanding individual vaccine intentions and behaviors through web search logs.
- Their vaccine intent classifier design is well-motivated, easy to follow, and performs well at 0.9 AUC.
- The authors did an in-depth study on this problem and provided enough details and additional analyses in the appendix.

Weak points (Cons)
- The evaluation of their vaccine intent classifier is insufficient, especially because their model is not compared with other baseline methods. If there are no direct methods to evaluate against, the authors should review somewhat relevant papers that use search logs in predictive modeling and use those as a set of baselines to compare the performance of the method.
- Design decisions of their modeling are often not justified. E.g., in Section 3.1, the authors chose to use personalized PageRank as it is a common technique for seed expansion methods. In fact, seed set expansion itself is a well-studied problem, and many more methods have been developed for this problem in the past decade. I'd suggest the authors review state-of-the-art methods for the seed set expansion problem and explore some other methods in their pipeline. Some examples are:
  - Whang, Joyce Jiyoung, David F. Gleich, and Inderjit S. Dhillon. "Overlapping community detection using seed set expansion." Proceedings of the 22nd ACM International Conference on Information & Knowledge Management. 2013.
  - Li, Y., He, K., Bindel, D., and Hopcroft, J. E. "Uncovering the small community structure in large networks: A local spectral approach." Proceedings of the 24th International Conference on World Wide Web. 2015, pp. 658-668.
- The authors claim that vaccine concerns differ significantly within holdouts. If this is true, I am worried that the performance of the binary vaccine intent classifier may be suboptimal, because there could be a large variance among holdouts. In such cases, treating the problem as clustering and finding clusters of holdouts with similar vaccine concerns may make more sense.

Minor comments
- In the abstract, please provide some details about your claims. E.g., the first claim is "vaccine intent classifier that can accurately detect ..."; here, please state how accurate it was. Also, in the abstract, "... find that key indicators emerge ..."; please list the indicators (or at least the most important ones).
- The captions for the tables should be placed above the tables, not below.
- Please justify the usage of CNNs for capturing textual information in the queries and URLs.
- Please justify using the Louvain algorithm for the community detection problem in Section 5.
- There is a typo in Section 3.1: please change S-PRR to S-PPR.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
fhxHhXTnHc
KDD.org/2023/Workshop/epiDAMIK
2023
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs
["Serina Chang", "Adam Fourney", "Eric Horvitz"]
To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates, with correlations above 0.86, and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchical ontology of vaccine concerns, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.
["COVID-19", "vaccination", "health behaviors", "misinformation", "search logs", "graph machine learning"]
ABSTRACT

To design effective vaccine policies, policymakers need detailed data about who has been vaccinated, who is holding out, and why. However, existing data in the US are insufficient: reported vaccination rates are often delayed or missing, and surveys of vaccine hesitancy are limited by high-level questions and self-report biases. Here, we show how large-scale search engine logs and machine learning can be leveraged to fill these gaps and provide novel insights about vaccine intentions and behaviors. First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier demonstrates strong agreement with CDC vaccination rates, with correlations above 0.86, and estimates vaccine intent rates to the level of ZIP codes in real time, allowing us to pinpoint more granular trends in vaccine seeking across regions, demographics, and time. To investigate vaccine hesitancy, we use our classifier to identify two groups, vaccine early adopters and vaccine holdouts. We find that holdouts, compared to early adopters matched on covariates, are 69% more likely to click on untrusted news sites. Furthermore, we organize 25,000 vaccine-related URLs into a hierarchical ontology of vaccine concerns, and we find that holdouts are far more concerned about vaccine requirements, vaccine development and approval, and vaccine myths, and even within holdouts, concerns vary significantly across demographic groups. Finally, we explore the temporal dynamics of vaccine concerns and vaccine seeking, and find that key indicators emerge when individuals convert from holding out to preparing to accept the vaccine.

KEYWORDS

COVID-19, vaccination, search logs, graph machine learning

ACM Reference Format:
Serina Chang†, Adam Fourney, and Eric Horvitz. 2023.
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 19 pages.

1 INTRODUCTION

COVID-19 vaccines provide significant protection against severe cases of SARS-CoV-2 [46,59], yet a large portion of the United States remains unvaccinated. Effective vaccine policies—for example, where to place vaccine sites [49,74], how to communicate about the vaccine [18,72], and how to design campaigns to reach unvaccinated populations [5,22,60]—rely on detailed data about who is seeking vaccination, who is holding out, and why. However, existing data are insufficient [43]. Reported vaccination rates are frequently delayed [2], missing at the county-level and below [70], and missing essential demographic data [33,42].

† Research performed during an internship at Microsoft.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK @ KDD'23, August 7 2023, Long Beach, CA
© 2023 Copyright held by the owner/author(s).
Surveys provide a starting point for understanding vaccine hesitancy but are often limited by high-level questions [16], small or biased samples [13,71], and self-reporting biases (e.g., recall or social desirability bias) [3,66], especially in sensitive contexts such as vaccination [36].

Here, we demonstrate how large-scale search logs from Bing and machine learning (ML) can be leveraged to fill these gaps, enabling fine-grained estimation of vaccine rates and discovering the concerns of vaccine holdouts from their search interests. While search logs are powerful, with widespread coverage, real-time signals, and access to personal interests, the vast amounts of data they provide are unlabeled and unstructured, consisting of billions of natural language queries and clicks on search results. To derive meaning from these queries and clicks, we first impose structure by constructing query-click graphs, which encode aggregated query-click patterns as bipartite networks. Second, using a combination of semi-supervised graph ML techniques and manual annotation, we develop two computational resources that enable us to extract vaccine behaviors from large unlabeled search logs.

First, we develop a vaccine intent classifier that can accurately detect when a user is seeking the COVID-19 vaccine on search. Our classifier achieves areas under the receiver operating characteristic curve (AUCs) above 0.90 on held-out vaccine intent labels in all states, and demonstrates strong agreement with CDC vaccination rates across states (r = 0.86) and over time (r = 0.89). Using our classifier, we can estimate vaccine intent rates to the level of ZIP code tabulation areas (ZCTAs), approximately 10x the granularity of counties and preceding lags in reporting.
We carefully correct for bias in our estimates from non-uniform Bing coverage, and demonstrate minimal additional bias from our classifier, as it achieves equivalent true and false positive rates across regions.

Second, we construct a novel ontology of COVID-19 vaccine concerns on search. Our ontology consists of 25,000 vaccine-related URLs, clicked on by Bing users, that we organize into a hierarchy of vaccine concerns from eight top categories to 36 subcategories to 156 low-level URL clusters. Unlike surveys, our ontology discovers these concerns directly from users' expressed interests and explores them at multiple scales. Furthermore, by measuring individuals' interest in each concern from their clicks, we capture revealed preferences, side-stepping potential biases in self-reporting [24,66].

Combining our ontology with the vaccine intent classifier allows us to conduct a thorough analysis of how individuals' vaccine concerns relate to whether they decide to seek the vaccine. We use our classifier to identify two groups of users—vaccine early adopters and vaccine holdouts—and compare their search behaviors. We identify significant differences in their vaccine concerns and news consumption; for example, compared to early adopters matched on covariates, vaccine holdouts are 69% more likely to click on untrusted news sites. We find that vaccine concerns also differ significantly even within holdouts, varying across demographic groups.
Finally, we analyze the temporal dynamics of vaccine concerns and vaccine seeking, and discover that individuals exhibit telltale shifts in vaccine concerns when they eventually convert from holding out to preparing to accept the vaccine.

Our contributions can be summarized as follows:
(1) A novel vaccine intent classifier, developed with graph ML and human annotation, that achieves AUCs above 0.9 on all states and strong agreement with CDC vaccination rates;
(2) Bias-corrected estimates of vaccine intent rates from our classifier, including estimates for over 20,000 ZCTAs;
(3) A hierarchical ontology of COVID-19 vaccine concerns, including 25,000 URLs clicked on by Bing users, 156 URL clusters, 36 subcategories, and eight top categories;
(4) Analyses of vaccine holdouts' search concerns and news consumption, comparing to early adopters and studying dynamics over time.

We are publicly releasing our code, vaccine estimates, and ontology.¹ We hope that our resources, methods, and analyses can provide researchers and public health agencies with valuable insights about vaccine behaviors, helping to guide more effective, data-driven interventions.

2 DATA

Our work uses a variety of datasets, including Bing search logs, CDC vaccination rates, US Census data, and Newsguard labels (Figure 1). Bing is the second largest search engine worldwide and in the US, with a US market share of around 6% on all platforms and around 11% on desktop [65]. Despite having non-uniform coverage across the US, Bing has enough penetration in the US that we can estimate representative samples after applying inverse proportional weighting (Section 4). The Bing data we use consist of individual queries made by users, where for each query, we have information including the text of the query, an anonymized ID of the user, the timestamp, the estimated geolocation (ZIP code, county, and state), and the set of URLs clicked on, if any.
Since our work is motivated by insufficient vaccine data and vaccine concerns in the US, we limit our study to search logs in the US market. However, the methods we introduce could be extended to study vaccination rates and vaccine concerns in other languages and countries. We apply our vaccine intent classifier (Section 3) to all Bing search logs in the US from February 1 to August 31, 2021.²

¹ https://github.com/microsoft/vaccine_search_study.
² February 2021 was the earliest that we could study following data protection guidelines, which allow us to store and analyze search logs up to 18 months in the past. We end in August 2021, since the FDA approved booster shots in September and our method is not designed to disambiguate between vaccine seeking for the primary series versus boosters.

Figure 1: Our work integrates a variety of datasets and methods to analyze vaccine behaviors from search logs.

To evaluate our vaccine intent classifier, we compare it to vaccination rates reported by the CDC (Section 4). The CDC provides daily vaccination rates at the levels of states [27] and counties [26]. CDC data are essential but limited, with a substantial portion of county-level data missing. These limitations serve as one of the motivations of our work, since we hope that our vaccine intent classifier can serve as a complementary resource to monitor vaccination rates, especially in smaller regions.
To characterize demographic trends in vaccine intent, we use data from the US Census' 2020 5-year American Community Survey [15]. To capture political lean, we use county-level data from the 2020 US presidential election [53]. To quantify the trustworthiness of different news sites, we use labels from Newsguard [52]. Finally, to evaluate the representativeness of Bing search trends, we compare them to Google search trends, which are publicly available online [34].

Data ethics. Our work was approved by the Microsoft IRB office and by an internal privacy review process which included officers from both Microsoft Research and the Bing product team. When we use search logs, we are mindful of the need to balance privacy and social benefits when using potentially sensitive user data. While we study individual search logs, since we need to be able to link individual vaccine outcomes (as predicted by our classifier) to search interests, those sessions are assembled using only anonymous user identifiers, which are disassociated from any specific user accounts or user profiles, and cannot be linked to any other Microsoft products. Likewise, in this anonymous view of the logs, location and demographic data were limited to ZIP code-level accuracy. Finally, we are careful to only report results aggregated over thousands of individuals. Aside from Bing search logs, all of the data sources we use are publicly available and aggregated over many individuals.

3 VACCINE INTENT CLASSIFIER

Our first goal is to develop a classifier that can accurately detect when a search user is expressing vaccine intent, i.e., trying to get the COVID-19 vaccine (e.g., book an appointment or find a location).
Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs. epiDAMIK @ KDD'23, August 7 2023, Long Beach, CA.

Figure 2: Our pipeline of methods to identify a large, high-precision set of vaccine intent URLs. (Step 1: URL candidates via Personalized PageRank; Step 2: Annotation via Amazon Mechanical Turk, where annotators are asked "Given that a person clicked on this page during a search session, how sure are you that this person is seeking to get the COVID-19 vaccine?"; Step 3: URL expansion via graph neural network.)

Detecting vaccine intent requires precision: for example, if a user issues the query [covid vaccine], they may be trying to get the vaccine, but they could also be generally curious about vaccine information or eligibility. Thus, we begin by defining a set of regular expressions that allow us to identify vaccine intent queries, i.e., queries that unambiguously express vaccine intent. To be included, the query must include both a COVID-19 term ("covid" or "coronavirus") and a vaccine term ("vaccin", "vax", "johnson", etc.). In addition, the query must satisfy at least one of the following criteria: (1) matching some variant of "find me a COVID-19 vaccine", (2) containing appointment-related words or location-seeking words, (3) containing a pharmacy name.

However, in addition to maintaining high precision, we seek to detect as many users as possible who have expressed vaccine intent, so that we have sufficient statistical power for our downstream analyses. Since our search logs contain both queries and clicks, we lose the opportunity to detect many more users if we only detect vaccine intent based on queries. For example, a user may issue the ambiguous query [covid vaccine], but then click on the URL for the CVS COVID-19 vaccine registration page, thus clarifying their intent through their clicks [61].
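A minimal sketch of this kind of query filter is below. The specific patterns and pharmacy names are illustrative stand-ins, not the paper's actual regular expressions, which are more extensive:

```python
import re

# Hypothetical approximations of the query criteria described above;
# the paper's real regular expressions are broader.
COVID = re.compile(r"\b(covid|coronavirus)")
VACCINE = re.compile(r"(vaccin|vax|johnson)")
CRITERIA = [
    re.compile(r"(find|get|where).*vaccin"),              # (1) "find me a vaccine" variants
    re.compile(r"(appointment|schedule|near me|location)"),  # (2) appointment/location words
    re.compile(r"\b(cvs|walgreens|walmart|rite aid)\b"),     # (3) pharmacy names
]

def is_vaccine_intent_query(query: str) -> bool:
    """Return True only if the query unambiguously expresses vaccine intent."""
    q = query.lower()
    if not (COVID.search(q) and VACCINE.search(q)):
        return False
    return any(c.search(q) for c in CRITERIA)
```

Note that an ambiguous query like [covid vaccine] passes the COVID and vaccine term checks but none of the three criteria, so it is correctly rejected.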
The challenge with URLs is that they are less formulaic than queries, so we cannot easily define regular expressions to identify URLs expressing vaccine intent. Our key insight is that, while we cannot use regular expressions to identify URLs, we can use them to identify vaccine intent queries and then use those queries to identify URLs, based on common query-click patterns. For example, vaccine intent queries such as [cvs covid vaccine] or [covid vaccine near me] may result in clicks on the CVS COVID-19 vaccine registration page. To capture these patterns, we construct query-click graphs [20, 45], which are bipartite networks between queries and URLs where an edge from a query to a URL indicates how often this query is followed by a click on this URL. Specifically, we construct a query-click graph per US state, aggregating over queries and clicks from two representative months in our study period (April and August 2021). Then, our pipeline proceeds in three steps (Figure 2): first, we use personalized PageRank to propagate labels from queries to URLs, so that we can generate a set of URL candidates (Section 3.1); next, we present the URL candidates to annotators on Amazon Mechanical Turk to label as vaccine intent or not (Section 3.2); finally, we use those labels to train graph neural networks (GNNs) so that we can further expand our set of vaccine intent URLs (Section 3.3).

Table 1: Top 5 URLs from Personalized PageRank (S-PPR) for the four largest states in the US.

CA: https://myturn.ca.gov/
    https://www.cvs.com/immunizations/covid-19-vaccine
    https://www.goodrx.com/covid-19/walgreens
    https://www.costco.com/covid-vaccine.html
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
NY: https://covid19vaccine.health.ny.gov/
    https://www.cvs.com/immunizations/covid-19-vaccine
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
    https://vaccinefinder.nyc.gov/
    https://www.goodrx.com/covid-19/walgreens
TX: https://www.cvs.com/immunizations/covid-19-vaccine
    https://vaccine.heb.com/
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
    https://corporate.walmart.com/covid-vaccine
    https://dshs.texas.gov/covidvaccine/
FL: https://www.publix.com/covid-vaccine
    https://www.cvs.com/immunizations/covid-19-vaccine
    https://www.walgreens.com/topic/promotion/covid-vaccine.jsp
    https://floridahealthcovid19.gov/vaccines/
    https://www.goodrx.com/covid-19/walgreens

3.1 Personalized PageRank for URL candidates

Personalized PageRank [14] is a common technique for seed expansion, where a set of seed nodes in a graph are identified as members of a community, and one wishes to expand from that set to identify more community members [40]. In our case, the vaccine intent queries act as our seed set, and our goal is to spread the influence from the seed set over the rest of the query-click graph. Given a seed set S, personalized PageRank derives a score for each node in the graph that represents the probability of landing on that node when running random walks from S.

We run personalized PageRank from the seed set of vaccine intent queries (S-PPR) to derive scores for all URLs in each query-click graph. Then, we order the URLs from each state according to their S-PPR ranking and keep the union over states of their top 100 URLs as our set of URL candidates, resulting in 2,483 candidates. The number of URLs we have in the union is much lower than the number of states multiplied by 100, since there is overlap between states. However, there is also substantial heterogeneity in top URLs across states, reflecting state-specific vaccine programs and policies (Table 1). By constructing separate graphs and running S-PPR per state, our approach is uniquely able to capture this state-specific heterogeneity.
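The seed-expansion step can be sketched with a small power-iteration implementation of personalized PageRank on a toy query-click graph. The graph, click counts, and damping factor below are illustrative only; the paper runs this per US state on real query-click graphs:

```python
# A minimal sketch of seed-personalized PageRank (S-PPR) on a toy
# query-click graph. Edges are treated as undirected, since random walks
# traverse query-click edges in both directions.
def personalized_pagerank(edges, seeds, alpha=0.85, iters=100):
    """edges: {node: {neighbor: weight}}; seeds: support of the restart distribution."""
    nodes = set(edges) | {v for nbrs in edges.values() for v in nbrs}
    adj = {u: dict(edges.get(u, {})) for u in nodes}
    for u, nbrs in edges.items():          # add reverse edges
        for v, w in nbrs.items():
            adj[v][u] = adj[v].get(u, 0) + w
    restart = {u: (1 / len(seeds) if u in seeds else 0.0) for u in nodes}
    score = dict(restart)
    for _ in range(iters):
        nxt = {u: (1 - alpha) * restart[u] for u in nodes}
        for u, nbrs in adj.items():
            total = sum(nbrs.values())
            for v, w in nbrs.items():      # walk mass proportional to click counts
                nxt[v] += alpha * score[u] * w / total
        score = nxt
    return score

# Toy graph: queries -> URLs with click counts (hypothetical values).
graph = {
    "[cvs covid vaccine]": {"cvs.com/covid-vaccine": 9},
    "[covid vaccine near me]": {"cvs.com/covid-vaccine": 4, "myturn.ca.gov": 6},
    "[weather today]": {"weather.com": 10},
}
scores = personalized_pagerank(
    graph, seeds={"[cvs covid vaccine]", "[covid vaccine near me]"}
)
urls = ["cvs.com/covid-vaccine", "myturn.ca.gov", "weather.com"]
ranked = sorted(urls, key=scores.get, reverse=True)
```

URLs reachable from the seed queries accumulate score, while unrelated URLs (here, weather.com) receive none; ranking URLs by score and keeping the top ones per state yields the candidate set.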
In supplementary experiments, we show that an alternative approach that uses a combined graph over states severely hurts performance for small states (Section A2.2).

S-PPR also provides scores for all queries in the graph, but we found that the seed set was comprehensive in identifying vaccine intent queries. The top-ranked queries that were not in the seed set tended to be location-specific, such as [covid vaccine new york], which is suggestive of vaccine intent but not unambiguous enough. Thus, in the subsequent steps of annotation and GNN expansion, we only seek to add URLs, and consider regular expressions sufficient for identifying queries. However, we also selected a sample of regular expression-detected queries to present to annotators, to validate whether they were truly vaccine intent. To capture a diverse sample, we use the union over the top 5 and bottom 5 queries per state (ranked by S-PPR), after filtering out queries that were issued by fewer than 50 users, resulting in 227 queries to label.

epiDAMIK @ KDD'23, August 7 2023, Long Beach, CA. S. Chang, A. Fourney, and E. Horvitz.

3.2 Annotation on Amazon Mechanical Turk

In this step, we present our URL candidates (and sampled queries) to annotators on AMT. For each URL, we first present it to three annotators. If all three give it a positive label (i.e., Highly Likely or Likely), then we label this URL as vaccine intent. If two give it a positive label and one does not, we assign it to one more annotator, and label it as vaccine intent if that annotator gives a positive label. In other words, we require vaccine intent URLs to receive three positive annotations. With this relatively strict bar, we still find that a large majority (86%) of our URL candidates are labeled as vaccine intent.
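The label-aggregation rule above (3-of-3 positive, or 2-of-3 plus a positive fourth tie-breaker) can be written as a small function; the function name is ours, for illustration:

```python
# Sketch of the AMT aggregation rule: a URL is labeled vaccine intent only
# if it ultimately receives three positive annotations.
def aggregate_label(first_three, tie_breaker=None):
    """first_three: three booleans (True = Highly Likely / Likely).
    tie_breaker: the fourth annotator's label, used only on a 2-1 split."""
    positives = sum(first_three)
    if positives == 3:
        return True
    if positives == 2:
        # Assign to one more annotator; require their label to be positive.
        return bool(tie_breaker)
    return False
```

For example, `aggregate_label([True, True, False], tie_breaker=True)` is accepted, while any URL with at most one positive among the first three annotators is rejected outright.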
Furthermore, we observe a clear relationship between S-PPR rank and the percentage labeled as vaccine intent: for example, around 90% of URLs from ranks 0 to 20, around 81% of URLs from ranks 40 to 60, and around 71% of URLs from ranks 80 to 100 (Figure A2). We also find a very high positive rate (96%) among the queries that we tested, thus validating our regular expressions.

3.3 Graph neural networks for expansion

Since manual annotation is expensive, we wish to augment our efforts by training ML models on the AMT labels, then use the models to expand our set of vaccine intent URLs. We formulate this problem as semi-supervised node classification on a graph, since the URLs are nodes in the query-click graph and we are trying to predict whether a URL indicates vaccine intent or not, given labels for a subset of URLs. In this section, we provide an overview of our modeling procedure, with details in Section A1.

GNN architecture and training. To solve this problem, we design a GNN [39] that consists of character-level convolutions (CNN) and graph convolutions. We use the CNNs to capture textual information in the queries and URLs, since text can be informative for this problem (e.g., the appearance of "vaccine"). The graph convolutions allow us to learn representations of URLs that draw from the representations of their neighboring queries, which draw from the representations of their neighboring URLs, and so on. In this way, we can capture "similar" URLs in embedding space (similar in terms of both text and graph structure).

To train and test our model, we randomly split the URL labels into a train set (60%), validation set (15%), and test set (25%). However, some states have much smaller graphs, and therefore, fewer positive and negative labels. For example, for Wyoming, we only have 245 positive and 276 negative URLs. We find that with so few labels, the model cannot adequately learn how to predict vaccine intent, with AUCs far below those of large states (Table A1).
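The 60/15/25 label split can be sketched as follows; the helper name and seed are illustrative, and real URL labels would replace the placeholders:

```python
import random

# Minimal sketch of the random 60/15/25 train/validation/test split of URL
# labels described above.
def split_labels(urls, seed=0):
    rng = random.Random(seed)
    urls = urls[:]                      # avoid mutating the caller's list
    rng.shuffle(urls)
    n = len(urls)
    n_train, n_val = int(0.60 * n), int(0.15 * n)
    return urls[:n_train], urls[n_train:n_train + n_val], urls[n_train + n_val:]

train, val, test = split_labels([f"url{i}" for i in range(100)])
```

Because the split is re-drawn in each of the 10 random trials (see "Evaluating GNN performance" below), a fresh seed per trial would be used in practice.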
To address this issue, we pre-train the model on S-PPR rankings, which requires no additional supervision. Our intuition is that S-PPR already performed remarkably well at predicting vaccine intent, as we discussed in the prior section. Furthermore, S-PPR rankings do not require any manual labels; we derive them entirely from our initial vaccine intent queries, which were automatically labeled using regular expressions. This pre-training encourages the model to learn URL representations that are predictive of S-PPR rankings, which we find help substantially with predicting vaccine intent.

Evaluating GNN performance. We evaluate model performance by computing its AUC on the held-out test set. Furthermore, to account for randomness from model training and data splitting, we run 10 random trials for every model/state, where in each trial, we re-split the URL labels, retrain the model on the train set, and re-evaluate the model's performance on the test set. First, we find that pre-training significantly improves performance for the smaller states; for example, the mean AUC for Wyoming increases from 0.74 to 0.95 (Figure 3a, Table A1). We find that pre-training seems unnecessary for the larger states, such as Connecticut and Tennessee, where we are already achieving high AUCs above 0.98. After incorporating pre-training for smaller states (fewer than 5,000,000 nodes), we are able to achieve AUCs above 0.90 for all 50 states and above 0.95 for 45 states (Figure 3b).

Discovering new vaccine intent URLs. Finally, we use our trained GNNs to identify new vaccine intent URLs. In order to decide which new URLs to include, we need a score threshold. Our goal is to set the threshold such that any URL that scores above it is very likely to truly be vaccine intent (i.e., we want to maintain high precision). Borrowing the idea of "spies" from positive-unlabeled learning [8], our idea is to use the held-out positive URLs in the test set to determine where to set the threshold.
We consider two thresholds: (1) t_med, the median score of the held-out positive URLs, and (2) t_prec, the minimum threshold required to achieve precision of at least 0.9 on the held-out test set. Then, we only include URLs that pass both thresholds in at least 6 out of the 10 random trials. Even with this strict threshold, we discover around 11,400 new URLs (Table A2), increasing our number of vaccine intent URLs by 10x. In the following section, we also evaluate the impact of adding these URLs on our ability to estimate regional vaccine intent rates. We find that the new URLs not only increase our coverage of vaccine intent users by 1.5x but also further improve our agreement with reported vaccination rates from the CDC (Table 2).

4 ESTIMATING VACCINE INTENT RATES

Using our classifier, we can estimate regional rates of vaccine intent. In this section, we discuss how we correct for bias in our estimates, validate against CDC vaccination rates, and use our estimates to derive insights about fine-grained vaccination trends.

Bias evaluation. In Section A2, we decompose potential bias in our approach into two key sources: first, bias from non-uniform Bing coverage, and second, bias from non-uniform true positive rates (TPR) and false positive rates (FPR) of our classifier. We show that, if we can correct for non-uniform Bing coverage and show that our classifier's TPRs and FPRs do not significantly differ across regions, our vaccine intent estimates should, theoretically, form unbiased estimates of true vaccination rates. We evaluate our classifier's TPRs and FPRs on held-out vaccine intent labels, using the same score threshold we used for discovering new vaccine intent URLs. We find that our classifier does indeed achieve statistically equivalent TPRs and FPRs across states (Figure 3b), suggesting that our classifier contributes minimal additional bias. We discuss below how we correct for non-uniform Bing coverage.
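The "spies"-style threshold selection can be sketched as below. The scores and labels are toy values, and the function names are ours; the paper applies this per trained model and state:

```python
from statistics import median

# Sketch of the two-threshold rule: t_med is the median score of held-out
# positives; t_prec is the smallest threshold achieving precision >= 0.9
# on the held-out test set.
def select_thresholds(scores, labels, target_precision=0.9):
    pos_scores = [s for s, y in zip(scores, labels) if y == 1]
    t_med = median(pos_scores)
    t_prec = None
    for t in sorted(set(scores)):                 # candidate thresholds
        kept = [y for s, y in zip(scores, labels) if s >= t]
        if kept and sum(kept) / len(kept) >= target_precision:
            t_prec = t
            break
    return t_med, t_prec

def include_url(trial_scores, trial_thresholds, min_trials=6):
    """Include a URL only if it passes BOTH thresholds in >= min_trials
    of the 10 random trials."""
    passes = sum(
        s >= t_med and s >= t_prec
        for s, (t_med, t_prec) in zip(trial_scores, trial_thresholds)
    )
    return passes >= min_trials
```

With toy held-out scores [0.95, 0.9, 0.8, 0.7, 0.4, 0.2] and labels [1, 1, 1, 0, 1, 0], t_med is 0.85 and t_prec is 0.8 (the first threshold at which all retained URLs are positive).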
Additionally, to evaluate the representativeness of Bing data, we compare search trends for vaccine intent queries between Google and Bing and find that, even before applying corrections to Bing data, the trends are highly correlated (Figure A4).

Figure 3: (a) GNN results with and without pre-training for Wyoming, one of the smallest states. Each line represents one of 10 random trials. (b) Final GNN results for all 50 states, with pre-training for smaller states. Each dot represents a state, with its y-coordinate representing the mean metric over 10 trials and grey bars indicating standard deviation.

Table 2: Each step of our classification pipeline (Section 3) improves both our correlation with CDC vaccination rates and our coverage of vaccine intent users.

Pipeline step            CDC corr.   # vaccine intent users
Only queries             0.62        3.18M
+ manual URLs            0.80        4.95M
+ manual and GNN URLs    0.86        7.45M

Estimating coverage-corrected rates. When we apply our classifier to Bing search logs from February 1 to August 31, 2021, we find 7.45 million "active" Bing users who expressed vaccine intent through their queries or clicks. We focus on active Bing users, i.e., those who issued at least 30 queries in a month, since we can reliably assign them to a location based on their mode ZIP code (or county or state) from those queries. Given a ZCTA z, we compute N(v̂,z), the number of active Bing users from z for whom we detect vaccine intent.
Furthermore, we estimate the ZCTA's Bing coverage as N(b,z)/N(z), where N(b,z) is its average number of active Bing users over the months in our study period and N(z) is its population size from the 2020 5-year American Community Survey [15]. Then, our coverage-corrected vaccine intent estimate p̃(v,z) for ZCTA z is

    p̃(v,z) = (N(v̂,z)/N(z)) / (N(b,z)/N(z)) = N(v̂,z)/N(b,z).

To estimate the vaccine intent rate for a set Z of ZCTAs, e.g., a state or county, we simply take the population-weighted average.

Figure 4: Comparing CDC state vaccination rates vs. estimated vaccine intent rates from Bing search logs.

Figure 5: Rates over time of first vaccine intent (top) vs. first dose from CDC (bottom) for the four largest states in the US.

Comparison to CDC vaccination data. When we compare our vaccine intent estimates to state-level vaccination rates from the CDC, we observe strong correlation (r = 0.86) on cumulative rates at the end of August 2021 (Figure 4). Notably, we find that the correlation drops to r = 0.79 if we do not correct for Bing coverage in our estimates. Furthermore, we find that each step of our classification pipeline (only using queries from regular expressions, incorporating manually annotated URLs from personalized PageRank and AMT, incorporating URLs found by GNNs) improves both our correlation with CDC rates and the number of users we are able
to identify (Table 2). Notably, if we only use queries, the correlation drops to r = 0.62 and we lose 57% of the users we identified with our full classifier, demonstrating the value of adding vaccine intent URLs through our graph ML framework.

Additionally, we compare our vaccine intent estimates to the CDC's vaccination rates over time. We observe strong correlations here as well, especially if we allow the CDC time series to lag behind the vaccine intent time series (Figure 5). With lags of 7-15 days (IQR), the median correlation over states reaches r = 0.89; without a lag, the median correlation drops to r = 0.78. The CDC's lag demonstrates an advantage of our classifier, as it can detect vaccine seeking in real time without delays from reporting.

Figure 6: (a) Using our classifier, we can estimate vaccine intent rates per ZCTA, approximately 10x the granularity of counties. (b) Zooming in on New York City shows that estimated vaccine intent rates vary substantially across ZCTAs, even within the same city or county. (c) Correlations between ZCTA vaccine intent rates and demographic variables.

Granular trends in vaccine seeking. Our vaccine intent classifier allows us to pinpoint who was seeking the COVID-19 vaccine, where, and when. We estimate cumulative vaccine intent rates up to the end of August 2021 at the level of ZCTAs (Figure 6a), approximately 10x the granularity of counties, which is the finest-grained vaccination data the CDC provides and, still, with many counties missing or having incomplete data [70]. We observe substantial heterogeneity in vaccine intent at the ZCTA level, even within the same states and counties.
For example, when we focus on New York City, we see that Manhattan and Queens have higher vaccine intent rates, and within Queens, ZCTAs in the northern half have higher rates (Figure 6b), aligning with reported local vaccination rates in New York City [11].

We can also use our estimates to characterize demographic trends in vaccination. When we measure correlations between ZCTA vaccine intent rate and different demographic variables, we find that overall demographic trends from our estimates align closely with prior literature [37, 41, 71, 76]. For example, we observe strong positive correlations with education, income, and population density, and a strong negative correlation with percent Republican (Figure 6c). However, we discover more nuanced trends when we look closer. Demographic trends vary significantly across states (Figure A5), especially for race and ethnicity, and trends change over time. For example, we estimate that older ZCTAs were much likelier to seek the vaccine early in 2021 but this trend fell over time (Figure A6a), reflecting how the US vaccine rollout initially prioritized seniors [38], and we see an increase in vaccine intent from more Republican ZCTAs in summer 2021 (Figure A6b). Thus, our classifier both confirms existing findings and enables new analyses with finer granularity across regions, demographics, and time.

5 SEARCH CONCERNS OF HOLDOUTS

We use our vaccine intent classifier to identify two groups: vaccine early adopters, who expressed their first vaccine intent before May 2021, and vaccine holdouts, who waited until July 2021 to show their first vaccine intent, despite becoming eligible by April.[3] Comparing the search interests of these two groups allows us to discover relationships between expressed vaccine concerns, news consumption, and vaccine decision-making.
To reduce potential confounding, we match each holdout with a unique early adopter from the same county and with a similar average query count, since we know that the populations seeking vaccination changed over time and we do not want our comparisons to be overpowered by regional or demographic differences. In our following analyses, we compare the search interests of the matched sets, with over 200,000 pairs.

Vaccine holdouts are more likely to consume untrusted news. First, we analyze the trustworthiness of news sites clicked on by vaccine holdouts versus early adopters. We use ratings from Newsguard, which assigns trust scores to news sites based on criteria such as how often the site publishes false content and how it handles the difference between news and opinion [52]. We find that, in the period while vaccine holdouts were eligible but still holding out (April to June 2021), holdouts were 69% (95% CI, 67%-70%) likelier than their matched early adopters to click on untrusted news, defined by Newsguard as domains with trust scores below 60. Furthermore, we see that the more the trust score from Newsguard degrades, the likelier it was that holdouts clicked on the site, relative to early adopters (Figure 7a). For example, sites that are known for spreading COVID-19 misinformation, such as Infowars [25], RT [6], and Mercola [31], were much likelier to be clicked on by holdouts.

[3] We did not consider as holdouts those who never showed vaccine intent during our study period, since those users may have gotten their vaccine in ways that are not visible via search data.
In comparison, individuals who did not show their first vaccine intent until July 2021 likely did not receive the vaccine before.

Figure 7: In all subfigures, news/categories are colored from yellow to dark purple to represent most holdout-leaning to most early adopter-leaning. (a) The lower the trust rating from Newsguard, the likelier it is that vaccine holdouts click on the news site, relative to early adopters. (b) Holdouts' top category concerns include Vaccine Safety, Requirements, and Information, with varying proportions over time. (c) Comparing holdouts vs. early adopters' relative probabilities of clicking on each subcategory (from April to June 2021) reveals each group's distinctive concerns. (d) Near when holdouts express vaccine intent (±3 days) in July and August 2021, their concerns become much more like the concerns of early adopters, with a few important differences. (Panels (c) and (d) rank subcategories from holdout-leaning, e.g., religious concerns, expert and high-profile anti-vax, eerie fears, exemption, anti-mandate, fake vaccine proof, to early adopter-leaning, e.g., comparing vaccines, normal side effects, and information about each vaccine.)

Ontology of vaccine concerns on search. To characterize vaccine-related search interests in far more detail, we construct a hierarchical ontology of vaccine concerns, defined in terms of 25,000 vaccine-related URLs that were clicked on by early adopters or holdouts.
We construct our ontology from the bottom-up: first, we seek to automatically partition the URLs into clusters. Leveraging graph ML again, we formulate this as a community detection problem on graphs, and apply the Louvain algorithm [12] to the collapsed URL-URL graph (collapsing the bipartite query-click graph over queries). We find that this approach results in remarkably coherent clusters (Table A3), due to the strength of the signal contained in query-click graphs, and outperforms standard topic modeling approaches such as LDA [10]. Based on these clusters, we design a comprehensive set of subcategories and top categories, and sort the clusters accordingly. For example, we identify one cluster of news stories announcing vaccine passport requirements in cities, which we sort under the proof of vaccination subcategory and Vaccine Requirements top category. This bottom-up approach allows us to discover and measure vaccine concerns directly from users' search interests and analyze them at multiple scales, providing complementary insights to more traditional surveys.

In Figure A1, we summarize our resulting ontology, which consists of 8 top categories and 36 subcategories. Some top categories encompass a number of distinct subcategories: for example, under Vaccine Safety, we include normal side effects, severe side effects, concerns about reproductive health, vaccine history and development, FDA approval, fear of vaccine-caused deaths, and "eerie" fears (e.g., myths about vaccine shedding or becoming magnetic [28]). At the top category level, we find that vaccine holdouts are, by far, the most concerned about Vaccine Safety, which accounts for 23% of their vaccine-related clicks, followed by Vaccine Information (10%) and Vaccine Requirements (9%).
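The collapse of the bipartite query-click graph into a URL-URL graph (the input to Louvain) amounts to a co-click projection, sketched here on toy clicks with hypothetical URLs:

```python
from collections import defaultdict
from itertools import combinations

# Sketch of collapsing the bipartite query-click graph over queries into a
# weighted URL-URL graph, on which community detection (e.g., Louvain) is
# then run. Queries and URLs below are toy examples.
def collapse_to_url_graph(query_clicks):
    """query_clicks: {query: set of clicked URLs}.
    Returns {(url1, url2): weight}, counting queries linking both URLs."""
    weights = defaultdict(int)
    for urls in query_clicks.values():
        for u1, u2 in combinations(sorted(urls), 2):
            weights[(u1, u2)] += 1
    return dict(weights)

clicks = {
    "[cvs covid vaccine]": {"cvs.com/vaccine", "walgreens.com/vaccine"},
    "[walgreens covid vaccine]": {"walgreens.com/vaccine", "cvs.com/vaccine"},
    "[vaccine side effects]": {"cdc.example/side-effects"},
}
url_graph = collapse_to_url_graph(clicks)
# -> {("cvs.com/vaccine", "walgreens.com/vaccine"): 2}
```

URLs clicked from the same queries end up strongly connected, so community detection on this projection groups them into the coherent clusters described above.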
We also observe changes in interests over time (Figure 7b): for example, interest in Vaccine Incentives increased in May 2021, and interest in Vaccine Effectiveness grew in June 2021, following the spread of the Delta variant.

Distinctive concerns of holdouts vs. early adopters. Our ontology allows us to compare the vaccine concerns of holdouts and their matched early adopters. First, during the period from April to June 2021, we find that holdouts were 48% less likely than early adopters to click on any vaccine-related URL. Furthermore, their distribution of concerns within their vaccine-related clicks differed significantly (Figure 7c). Using the subcategories from our ontology, we find that holdouts were far more interested in religious concerns about the vaccine; anti-vaccine messages from experts and high-profile figures; avoiding vaccine requirements by seeking exemptions, banning mandates, or obtaining fake proof of vaccination; eerie fears and vaccine-caused deaths; and FDA approval and vaccine development. In comparison, early adopters were much more concerned about normal side effects, vaccine efficacy, comparing different types of vaccines, and information about each vaccine (Moderna, Pfizer, and Johnson & Johnson). These differences reveal the importance of a fine-grained ontology; for example, at the top category level, we would see that both groups were interested in Vaccine Safety but miss that early adopters were more concerned about normal and severe side effects, while holdouts were more concerned about eerie fears and vaccine-caused deaths. Our approach also allows us to study who is expressing these concerns in greater granularity. Even within holdouts, we observe significant variability in concerns across demographic groups (Figure A7).
For example, holdouts from more Democrat-leaning ZCTAs were particularly concerned about FDA approval and vaccine requirements, while holdouts from more Republican-leaning ZCTAs were more concerned about eerie fears and vaccine incentives.

Holdouts appear like early adopters when seeking the vaccine. In our final analysis, we exploit the fact that all of our vaccine holdouts eventually expressed vaccine intent to explore how vaccine concerns change as an individual converts from holdout to adopter. From July to August 2021, we analyze how holdouts' vaccine concerns change in the small window (±3 days) surrounding their expressed vaccine intent, compared to their typical concerns outside of that window. We find that in those windows, holdouts' vaccine concerns nearly reverse, such that they look much more like early adopters than their typical selves (Figure 7d nearly reverses 7c). During this time, holdouts become far more interested in the Johnson & Johnson vaccine, comparing different vaccines, and vaccine incentives, and less interested in anti-vaccine messages and vaccine fears. Notably, not all early adopter-leaning concerns reverse as dramatically; for example, even while expressing vaccine intent, holdouts remain less interested in the Pfizer and Moderna vaccines, which may reflect how vaccine hesitant individuals were quicker to accept the one-shot Johnson & Johnson vaccine, instead of the two-shot mRNA vaccines [21, 73]. Furthermore, there are some early adopter-leaning concerns that holdouts do not pick up on during this time, such as interest in vaccine rates.
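A Figure 7c-style comparison of the two groups' relative click probabilities per subcategory can be sketched as a smoothed log-ratio; the smoothing choice and function name are ours, and the click lists are toy data:

```python
import math
from collections import Counter

# Toy sketch of comparing holdouts' vs. matched early adopters' relative
# probabilities of clicking on each subcategory: positive values lean
# holdout, negative lean early adopter (as in Figure 7c).
def subcategory_lean(holdout_clicks, adopter_clicks):
    h, a = Counter(holdout_clicks), Counter(adopter_clicks)
    cats = set(h) | set(a)
    h_tot, a_tot = sum(h.values()), sum(a.values())
    return {
        c: math.log((h[c] + 1) / (h_tot + len(cats)))      # add-one smoothing
           - math.log((a[c] + 1) / (a_tot + len(cats)))
        for c in cats
    }

lean = subcategory_lean(
    ["eerie fears"] * 6 + ["normal side effects"] * 2,
    ["normal side effects"] * 6 + ["eerie fears"] * 2,
)
# lean["eerie fears"] > 0 (holdout-leaning); lean["normal side effects"] < 0
```

Comparing the same statistic inside versus outside the ±3-day window around a holdout's expressed vaccine intent would reproduce the reversal pattern of Figure 7d.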
We hypothesize that these concerns are more reflective of an early adopter "persona" than of concerns that would become relevant when seeking the vaccine, such as comparing different vaccines.

6 RELATED WORK

Our work centers Bing search logs, which have been used to study other health issues such as shifts in needs and disparities in information access during the pandemic [67, 68], health information needs in developing nations [1], experiences around cancer diagnoses [55, 56], concerns rising during pregnancy [29], and medical anxieties associated with online search [75]. Our efforts build on prior work that extracts insights about the COVID-19 vaccine from digital traces, such as social media [50, 57, 58] and aggregated search trends [7, 23, 48]. Our work is also related to other efforts to detect health conditions online, such as predicting depression from social media [19] and monitoring influenza from search queries [32].

Our work seeks to address the challenges of working with digital traces [24, 54] and limitations of prior work [32, 44] by developing ML and human-in-the-loop methods to precisely label search logs and evaluate bias. Furthermore, as one of the first works to use individual search logs to study the COVID-19 vaccine, we have the rare opportunity to link vaccine outcomes (predicted by our classifier) to the same individual's search interests. Our graph ML pipeline is also similar to other "big data" approaches that, due to the scale of unlabeled data, manually annotate a subset of data, train machine learning models to accurately predict those labels, then use those models to label the rest of the data [17, 30, 35, 47].
We extend this approach in several ways, such as by using personalized PageRank to select URLs for more efficient annotation and by setting a strict classification threshold based on "spies" to ensure high precision. 7 DISCUSSION We have demonstrated how large-scale search logs and machine learning can be leveraged for fine-grained, real-time monitoring of vaccine intent rates and identification of individuals' concerns about vaccines. There are limitations to our approach: for example, while we can achieve finer granularity than existing data, we still miss within-ZCTA heterogeneity in vaccine intent. Furthermore, our efforts to minimize bias in our estimates are substantial but imperfect (e.g., we can only approximate TPRs and FPRs of our classifier). We also assume in this work that vaccine intent can be detected through single queries or clicks, but more sophisticated models could incorporate entire search sessions or browsing data beyond search. However, in favor of simplicity and considerations of privacy, we label vaccine intent at the query and click level. Despite these limitations, our resources demonstrate strong agreement with existing data and enable analyses that have not been available before. For example, our fine-grained vaccine intent estimates can help public health officials to identify under-vaccinated communities, informing where to place vaccine sites or whom to prioritize in online or real-world outreach programs. Furthermore, our novel ontology and analyses of individuals' vaccine concerns inform how to intervene, guiding messaging strategies for different holdout populations.
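The personalized-PageRank step described above can be illustrated with a small self-contained sketch. This is not the paper's implementation: the query-URL graph, node names, and seed choice are invented, and a plain power iteration stands in for whatever solver was actually used. The idea is only that teleportation returns to seed nodes known to signal vaccine intent, so URLs tightly connected to those seeds rank highest and can be annotated first.

```python
def personalized_pagerank(out_edges, seeds, alpha=0.85, iters=100):
    """Power iteration where teleportation returns to the seed nodes."""
    nodes = set(out_edges) | {v for vs in out_edges.values() for v in vs}
    total = sum(seeds.values())
    restart = {n: seeds.get(n, 0.0) / total for n in nodes}
    score = dict(restart)
    for _ in range(iters):
        nxt = {n: (1.0 - alpha) * restart[n] for n in nodes}
        for n, s in score.items():
            targets = out_edges.get(n, [])
            if targets:
                share = alpha * s / len(targets)
                for t in targets:
                    nxt[t] += share
            else:  # dangling node: send its mass back to the restart set
                for r, p in restart.items():
                    nxt[r] += alpha * s * p
        score = nxt
    return score

# Invented toy graph: queries point to the URLs they led users to.
graph = {
    "q:covid vaccine near me": ["u:vaccines.gov/search", "u:cvs.com/vaccine"],
    "q:cvs vaccine appointment": ["u:cvs.com/vaccine"],
    "q:is the vaccine safe": ["u:cdc.gov/safety"],
}
scores = personalized_pagerank(graph, seeds={"q:covid vaccine near me": 1.0})
urls_by_score = sorted((n for n in scores if n.startswith("u:")),
                       key=scores.get, reverse=True)
```

URLs reachable from the seed accumulate score, while URLs connected only to unrelated queries stay near zero, which is what makes the ranking useful for prioritizing annotation.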
Lastly, our observation that holdouts resemble early adopters when they eventually seek vaccination indicates that individuals might follow similar paths towards vaccine acceptance. Future work could model these trajectories, try to identify key influences (e.g., vaccine mandates), and use these models to ideally allocate limited resources for interventions. To facilitate policy impact and future research, we are releasing our vaccine intent estimates and our ontology of vaccine concerns. We hope that these resources will be useful for conducting detailed analyses of COVID-19 vaccine behaviors and vaccination rates. The ontology can also be employed widely in web and social media research; for example, to study how certain classes of URLs (e.g., eerie fears) are disseminated on social media or surfaced by search engines. Finally, we note that our graph ML techniques for intent detection are applicable beyond vaccines, and could be applied to precisely detect other intents of interest, such as seeking stimulus checks or COVID-19 tests. More broadly, we hope that our work can serve as a roadmap for researchers of how to derive rigorous behavioral and health insights from search logs, including how to precisely detect user intents and interests, evaluate and correct for bias, validate against external data, and release resources to promote reproducibility, transparency, and future work. (Running header: Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs, epiDAMIK @ KDD'23, August 7 2023, Long Beach, CA)
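The "spy" thresholding from positive-unlabeled learning, mentioned above as the way a strict classification cutoff was set, can be sketched as follows. All scores and the retention fraction here are hypothetical; the point is only that a held-out subset of known positives (the spies) calibrates where the decision threshold should sit so that nearly all true positives would clear it.

```python
def spy_threshold(spy_scores, keep_fraction=0.95):
    """Cutoff such that `keep_fraction` of the spies score at or above it."""
    ranked = sorted(spy_scores, reverse=True)
    cutoff_index = int(len(ranked) * keep_fraction) - 1
    return ranked[max(cutoff_index, 0)]

# Hypothetical scores a trained classifier might assign.
spy_scores = [0.97, 0.93, 0.91, 0.88, 0.60]         # held-out known positives
unlabeled_scores = {"a": 0.95, "b": 0.70, "c": 0.10}

threshold = spy_threshold(spy_scores, keep_fraction=0.8)
predicted_positive = [k for k, s in unlabeled_scores.items() if s >= threshold]
```

Raising `keep_fraction` loosens the cutoff (more recall); lowering it makes the threshold stricter, trading recall for the high precision the paper aims for.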
170KJ-xfkY
Very well-motivated problem; well-designed computational study of health policy
5: Top 50% of accepted papers, clear accept
The paper proposes and implements a framework for fine-grained estimation of vaccination rates across geographical locations, vaccine holdouts, and the behavior of vaccine holdouts over time. The authors leverage a combination of search engine query data, aggregate vaccination rates, census data, and news reliability ratings (i.e., Newsguard) for their method. This is a particularly challenging problem due to lags in vaccination reporting and self-reporting biases, especially among holdouts. The authors demonstrate that their vaccine intent classifier performs well and correlates with CDC vaccination rates, and conduct a fine-grained analysis of concerns among vaccine holdouts over time. The real-world impact and applicability of this paper is obvious to me. The authors select a very topical and compelling area (COVID-19 vaccination hesitancy) as well. Although my experience is primarily computational, the results seem grounded in vaccine policymaking objectives/priorities as well. This work further provides a template that could be potentially adapted to other policy rollouts both retrospectively (e.g., ACA rollout) and in the future, provided that the requisite data sources are available. The comparison of query data between different sources (Bing vs. Google) also addressed my biggest concern — i.e., how representative is the population studied. I also found the breakdowns of vaccine intent by demographic to be very compelling (Fig. 6c, and A5). A few questions about the method: * Since there are so many steps, the pipeline for generating vaccine intent labels seems susceptible to error propagation (i.e., if there is a systematic bias in the human annotators, or earlier in the pipeline) since it depends on the quality of data collected — what checks, in addition to those mentioned in the paper (some human evaluation & comparison of Google vs. Bing query data), were done for systematic biases/other pitfalls at each stage of the pipeline?
* It is slightly unclear to me how negative vaccine intent examples were labeled. Is this based on the human annotation method in Sec. 3.2 (i.e., <3 positive annotations), followed by GNN-based label-propagation + spies? What if we label vaccine intent using a simple majority vote method (i.e., 2-1 is sufficient) at the human annotator phase? Are queries that have nothing to do with COVID-19 or vaccinations ever included as negative examples? 
Some further questions about the results: * In Fig. 6a, some counties are shown in white. Is this because the sample size is too small to generate an estimate of vaccine intent? The authors choose Newsguard as a provider of news reliability ratings; however, such ratings are inherently dependent on the rating provider's specific methodology (i.e., who decides who is more reliable in an increasingly polarized news environment). Are there alternate providers of trust ratings, and are the results robust to such changes? * How were the URL clusters validated? How was model selection (i.e., Louvain over LDA) performed? What is the definition of a "remarkably coherent cluster?" While all of the results look believable, I would have liked to see some measurement of cluster quality here (although this is difficult to do objectively) in addition to the qualitative analysis. Or, is there a human-annotator based way to partially validate these clusters? * I don't know that "Holdouts appear like early adopters" is the correct framing towards the end of Sec. 5 — I would expect 7d to look much flatter (vertically) if that were the case, which is true for a few of the bars, but instead I mostly notice the reversal. So it seems like the correct conclusion is that some holdouts' concerns dramatically shift w.r.t. early adopters at some point, while others converge towards early adopters' concerns. The reversal trend is probably the most interesting piece in my opinion. Additional breakdowns of the results that I would find interesting: * Stratification by area deprivation index, tribal vs. non-tribal, rural vs. urban (Pop/sq. m. is a proxy), access to healthcare (e.g., # of pharmacies offering the vaccine per capita/within 1h). I also wanted to raise a potential ethical consideration for future work — due to the cross-platform aggregation of data required, the potential for privacy violations due to invasive behavioral interventions or discrimination should be considered in my opinion — for example, targeting specific users for misinformation, vaccine providers/pharmacies engaging in implicit adverse selection by targeting specific segments, or discriminatory labor practices based on vaccine status. One could replace the word "vaccine" with "health" for similar studies on health policy as well. Since this study largely consists of retrospective data analysis, the risk to users' privacy is very small at this stage. While I think the authors exercised due diligence in data ethics via IRB approval, anonymization, dissociation from specific user accounts/profiles, ZIP-level granularity, and ensuring no linkage to other products is possible, I am wondering about the potential for actors that do not exercise the same standards of diligence as the authors to harm users' privacy. I.e., could a bad actor copy this code and engage in behavioral interventions/discriminatory practices, and what safeguards, computational, legal, or otherwise, exist to mitigate any such threats? Overall, I think the authors did develop a rigorous and well-motivated method for classifying vaccine intent via a multi-stage pipeline featuring regex queries, URL identification via a combination of PPR, human annotation, a GNN, and the Spy technique from PU learning. The fine-grained analysis of the model's predictions then provides insights into vaccine hesitancy rates, and how concerns of vaccine holdouts change over time. I find that this is already a well-motivated, clear, and well-written computational study of vaccination policy, and addressing the above would simply strengthen the work further in my opinion.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
PhAOtEHLo1
KDD.org/2023/Workshop/epiDAMIK
2023
Consistent Comparison of Symptom-based Methods for COVID-19 Infection Detection (Extended Abstract)
["Jes\u00fas Rufino", "Juan Marcos Ramirez", "Jos\u00e9 Aguilar", "Carlos Baquero", "Jaya Champati", "Davide Frey", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
During the global pandemic crisis, several COVID-19 diagnosis methods based on survey information have been proposed with the purpose of providing medical staff with quick detection tools that allow them to efficiently plan the limited healthcare resources. In general, these methods have been developed to detect COVID-19-positive cases from a particular combination of self-reported symptoms. In addition, these methods have been evaluated using datasets extracted from different studies with different characteristics. On the other hand, the University of Maryland, in partnership with Facebook, launched the Global COVID-19 Trends and Impact Survey (UMD-CTIS), the largest health surveillance tool to date that has collected information from 114 countries/territories from April 2020 to June 2022. This survey collected information on various individual features including gender, age groups, self-reported symptoms, isolation measures, and mental health status, among others. In this paper, we compare the performance of different COVID-19 diagnosis methods using the information collected by UMD-CTIS, for the years 2020 and 2021, in six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. The evaluation of these methods with homogeneous data across countries and years provides a solid and consistent comparison among them.
["COVID-19 diagnosis", "F1-score", "light gradient boosting machine", "logistic regression", "rule-based methods."]
ABSTRACT During the global pandemic crisis, several COVID-19 diagnosis methods based on survey information have been proposed with the purpose of providing medical staff with quick detection tools that allow them to efficiently plan the limited healthcare resources. In general, these methods have been developed to detect COVID-19-positive cases from a particular combination of self-reported symptoms. In addition, these methods have been evaluated using datasets extracted from different studies with different characteristics. On the other hand, the University of Maryland, in partnership with Facebook, launched the Global COVID-19 Trends and Impact Survey (UMD-CTIS), the largest health surveillance tool to date that has collected information from 114 countries/territories from April 2020 to June 2022. This survey collected information on various individual features including gender, age groups, self-reported symptoms, isolation measures, and mental health status, among others. In this paper, we compare the performance of different COVID-19 diagnosis methods using the information collected by UMD-CTIS, for the years 2020 and 2021, in six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. The evaluation of these methods with homogeneous data across countries and years provides a solid and consistent comparison among them. KEYWORDS COVID-19 diagnosis, F1-score, light gradient boosting machine, logistic regression, rule-based methods. 1 INTRODUCTION In December 2019, the coronavirus disease 2019 (COVID-19) emerged in China, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [17]. Within a few months, this disease led to a global pandemic crisis that has challenged national healthcare systems [6]. More precisely, by June 2023, the cumulative number of confirmed cases worldwide exceeded 688 million, and officially over 6,800,000 people have died from COVID-19; https://www.worldometers.info/coronavirus/.
In this context, the planning of the healthcare resources (e.g., the estimation of the number of hospital beds or intensive care units needed for COVID-19 patients) has been determined by the availability of quick and efficient instruments for the diagnosis of active cases. The reverse transcriptase-polymerase chain reaction (RT-PCR) test has been considered the standard tool to detect infected people [5]. However, real-time disease monitoring based on the RT-PCR test demands material and human resources that are not always available. To overcome these limitations, various diagnosis methods based on survey information have been proposed that combine multiple individual features (age, gender, symptoms, demographic data, etc.) to characterize COVID-19-infected people [1-4, 9-12, 14-16, 18, 19]. Specifically, most of these methods propose simple rules or build machine learning models that evaluate a set of individual attributes to determine a COVID-19-positive case. However, a consistent comparison framework that evaluates the performance yielded by the different methods is missing, since the generated models and the corresponding conclusions are assessed using different datasets that are heterogeneous in size and type. On the other hand, in April 2020, the University of Maryland Global COVID-19 Trends and Impact Survey (UMD-CTIS), in partnership with Facebook, launched the largest global health surveillance platform to date [8]. More precisely, this project stored the responses provided by a subset of Facebook invited users about different topics related to the COVID-19 pandemic such as the presence of symptoms, RT-PCR outcomes, and vaccination acceptance, among others. This data collection instrument was available in 56 languages and it recorded tens of millions of responses from 114 countries or territories worldwide. In this paper, we conduct a consistent comparison of different methods that detect COVID-19-positive cases from a combination of features collected from surveys.
To this end, we take into account the information included in the UMD-CTIS records extracted from six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. For each country, the models are trained using a randomly selected subset of tested individuals who reported at least one symptom. Furthermore, we compare the performance for two years: 2020 and 2021, which represent two different periods of the pandemic without and with vaccination, respectively. We compare the detection methods using four performance metrics: F1-score, sensitivity, specificity, and precision (only F1-score is presented in this extended abstract). Overall, the detection methods exhibiting the best performances across different groups and metrics are Mika [10] (F1-score: 59.33%), Astley [3] (F1-score: 59.22%), Smith [16] (F1-score: 59.22%), Bhattacharya [4] (F1-score: 58.69%), Roland [12] (F1-score: 58.20%), Shoer [15] (F1-score: 58.15%), Menni_1 [9] (F1-score: 57.03%), and Menni_2 [9] (F1-score: 56.94%). 2 MATERIALS AND METHODS 2.1 UMD-CTIS Survey We perform a consistent comparative study of various COVID-19 active case detection methods from data provided by the UMD-CTIS survey. More precisely, since April 23, 2020, Facebook worldwide users were invited to participate in the UMD-CTIS survey. Users who accepted the invitation were moved to a web survey platform, where potential participants must report age > 18 and consent of data use before responding to the survey. The survey instrument consists of a web-based questionnaire collecting information on gender, age groups, symptoms, COVID testing, isolation, and vaccination, among others. Furthermore, the survey instrument was continuously updated to aggregate new items.
Finally, UMD organized and stored daily microdata that were further processed to develop our comparative study. 2.2 Comparative study design In this work, we compare the performance of various COVID-19 detection methods using the information provided by UMD-CTIS data extracted from six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. These countries are selected based on geographical diversity and the large amount of available data. In addition, this comparative study is performed for two non-overlapped periods: (2020) from April 23 to December 31, 2020, and (2021) from January 1 to December 31, 2021. Notice that the end of 2020 matches the start of the first COVID-19 vaccination campaigns. Therefore, we can compare the performance of the detection methods without and with information on vaccination. Table 1 summarizes the characteristics of the study population for the various countries and for the two periods under test. For every country and period, we build a dataset by picking the answers reporting lab test results in the last 14 days (the survey does not collect the test type) and at least one potential COVID-19 symptom, i.e., this comparative study selects the tested and symptomatic cases. We select symptomatic cases because feature-based predictive methods typically aim at finding the combination of symptoms that detect infected people. In addition, we choose the tested individuals with the aim of obtaining the ground truth sample set that allows us to evaluate the performance of the different methods quantitatively. Since questionnaires contain categorical data, we apply binary encoding (dummy coding) to each response. This leads to datasets with 201 features (attributes, columns, or variables) for 2020, and the datasets have between 431 and 452 columns for 2021 depending on the selected country. For each dataset, this study evaluates the performance of the various COVID-19 active case detection methods.
To this end, our study divided every dataset into 100 partitions. For each trial, 80% of the dataset rows (questionnaires or samples) were randomly selected as training samples, and the remaining 20% were used to test the various methods. 2.3 Detection methods under comparison In this work, we compare the performance of various COVID-19 diagnosis methods belonging to three categories: (1) Rule-based methods: CDC [1], WHO [18], Akinbami [2], Salomon [14], Perez [11]. (2) Logistic regression techniques: Menni [9], Roland [12], Smith [16], Shoer [15], Bhattacharya [4], Mika [10]. (3) Tree-based machine-learning models: Zoabi [19], Astley [3]. In this work, we have implemented two versions of the Menni method and two versions of the Zoabi method. Note that UMD-CTIS data did not register whether the respondent skipped meals. Therefore, we modified the Menni method by fixing the skipped meals variable to zero (Menni_1). Furthermore, we followed the procedure reported in [9] to build the logistic regression model from individual features available in our dataset (Menni_2). In other words, we built a regression model that considers the features: age, gender, loss of smell and taste, cough, and fatigue. In the case of the Zoabi method, notice that UMD-CTIS data ranges of ages do not have a boundary at 60. The boundary is either at 55 or 65. We have created two different models, one for ages greater than 55 years (Zoabi_55) and the other for ages greater than 65 years (Zoabi_65). Further information regarding the methods under test can be found in the corresponding references and in the full version of the article [13]. 2.4 Benchmarking detection methods First, we use the F1-score to quantitatively assess the performance of the various detection methods. To this end, our procedure firstly obtains the predictions over the test set for each trial.
From the predicted estimates and the ground truth data, the procedure identifies the number of true positives TP, false positives FP, true negatives TN, and false negatives FN. Then, the F1-score is obtained as follows: F1 = 2TP / (2TP + FP + FN). (1) Tables 2 and 3 display the ensemble average and the CI of the F1-score for the six countries and for 2020 and 2021, respectively. Specifically, each value in these tables is obtained by averaging 100 realizations of the corresponding experiment. Tables with the sensitivity, specificity, and precision values obtained are included in the full version of the article [13]. 3 RESULTS As can be seen in Table 1, 83,238 respondents from Brazil reported a test outcome and at least one symptom in 2020. In this cohort, 44,963 participants reported a positive test result, and 38,275 respondents had a negative test outcome. Table 1 also includes the test positive rate (TPR), where TPR = (100 × positive)/(tested symptomatic). For example, the TPR for Brazil 2020 is 54.02%. On the other hand, for Brazil 2021, the dataset was extracted from 262,683 participants who reported at least one symptom and the outcome of a test done in the last 14 days. In this case, 106,471 respondents reported a positive test result, and 156,212 questionnaires informed a negative test outcome with a TPR of 40.53%. In summary, the number of tested symptomatic, the number of positive cases, and the number of negative results for the remaining countries in 2020 and 2021 are displayed in Table 1. Additionally, Table 1 shows information about other individual features such as gender and age groups. Table 2 shows the ensemble averages with the corresponding 95% confidence intervals (CI) of the F1 score yielded by the various detection methods for the different countries and for 2020.
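The evaluation protocol described above (repeated random 80/20 splits, F1 per Eq. (1), and means with 95% CIs over trials) can be sketched as follows. This is not the authors' code: the toy dataset and the fixed single-symptom rule are invented stand-ins for the benchmarked methods (which are retrained per trial), and the normal-approximation CI is one plausible construction since the paper does not spell out its own.

```python
import random
import statistics

def f1_score(tp, fp, fn):
    # Eq. (1): F1 = 2TP / (2TP + FP + FN)
    return 2 * tp / (2 * tp + fp + fn)

def evaluate(rows, predict, trials=100, seed=0):
    """Repeated random 80/20 splits; returns mean F1 and a 95% CI."""
    rng = random.Random(seed)
    f1s = []
    for _ in range(trials):
        shuffled = rows[:]
        rng.shuffle(shuffled)
        test = shuffled[int(0.8 * len(shuffled)):]  # 20% held out
        tp = sum(1 for x, y in test if predict(x) and y)
        fp = sum(1 for x, y in test if predict(x) and not y)
        fn = sum(1 for x, y in test if not predict(x) and y)
        f1s.append(f1_score(tp, fp, fn))
    mean = statistics.mean(f1s)
    half = 1.96 * statistics.stdev(f1s) / len(f1s) ** 0.5
    return mean, (mean - half, mean + half)

# Invented toy data: (has_symptom, tested_positive) pairs.
rows = ([(True, True)] * 40 + [(False, True)] * 10 +
        [(True, False)] * 10 + [(False, False)] * 40)
mean_f1, ci = evaluate(rows, predict=lambda has_symptom: has_symptom)
```

Averaging over many random splits, as the paper does, narrows the CI around the mean F1 and makes the comparison between methods less sensitive to any single train/test partition.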
In particular, the methods with the best F1 scores for each country are: Brazil (Astley: 73.72%), Canada (Menni_1: 54.33%), Israel (Bhattacharya: 62.78%), Japan (Menni_1: 46.33%), Turkey (Bhattacharya: 67.67%), and South Africa (Roland: 67.32%). The F1 scores in % and the CIs obtained for 2021 are displayed in Table 3. For 2021, the best F1 scores are: Brazil (Menni_2: 66.54%), Canada (Smith: 50.28%), Israel (Bhattacharya: 58.76%), Japan (Mika: 52.41%), Turkey (Bhattacharya: 64.61%), and South Africa (Menni_2: 66.50%). As observed in Tables 2 and 3, none of the methods achieved an F1 score of 74% or above, indicating that no model is very good. According to Table 1, Brazil, Turkey, and South Africa exhibit TPR values at least twofold higher than those obtained from Canada, Israel, and Japan.

Table 1: Characteristics of the study population for the various countries and for two non-overlapped periods (2020 and 2021). Each cell gives 2020 / 2021 values.
Tested symptomatic, N: Brazil 83238/262683; Canada 8927/33997; Israel 5944/19063; Japan 4698/41010; Turkey 15952/28896; South Africa 7883/23038
Test outcome, positive, N: Brazil 44963/106471; Canada 838/3433; Israel 1238/2869; Japan 532/4011; Turkey 6167/9228; South Africa 2866/8459
Test outcome, negative, N: Brazil 38275/156212; Canada 8089/30564; Israel 4706/16194; Japan 4166/36999; Turkey 9785/19668; South Africa 5017/14579
TPR, %: Brazil 54.02/40.53; Canada 9.39/10.10; Israel 20.83/15.05; Japan 11.32/9.78; Turkey 38.66/31.94; South Africa 36.35/36.71
Gender, female, N: Brazil 45357/130235; Canada 5438/19472; Israel 2941/9290; Japan 1679/14283; Turkey 3939/7185; South Africa 3923/11291
Gender, male, N: Brazil 24928/76689; Canada 2315/9824; Israel 2199/6746; Japan 2388/20791; Turkey 8920/15292; South Africa 2525/6730
Age 18-24, N: Brazil 8270/27474; Canada 1136/3248; Israel 583/1498; Japan 179/871; Turkey 1716/2267; South Africa 739/1580
Age 25-34, N: Brazil 19596/56227; Canada 2337/7172; Israel 1144/3069; Japan 577/3797; Turkey 4375/5756; South Africa 2252/4889
Age 35-44, N: Brazil 21061/57452; Canada 1750/6688; Israel 1041/3333; Japan 997/7527; Turkey 4043/7110; South Africa 1801/4721
Age 45-54, N: Brazil 13776/39122; Canada 1210/5215; Israel 933/3115; Japan 1216/10413; Turkey 2071/4594; South Africa 1141/3878
Age 55-64, N: Brazil 6968/22190; Canada 954/4478; Israel 880/2634; Japan 828/8724; Turkey 862/2400; South Africa 491/2124
Age 65-74, N: Brazil 140/6016; Canada 308/2421; Israel 510/1957; Japan 479/3529; Turkey 158/719; South Africa 1667/799
Age 75+, N: Brazil 233/1025; Canada 126/825; Israel 143/627; Japan 66/846; Turkey 21/134; South Africa 27/230

Table 2: F1 score and its 95% confidence interval for the selected countries for 2020, in %. Values per method, in order: Brazil; Canada; Israel; Japan; Turkey; South Africa.
Menni_1: 65.56 (65.48-65.64); 54.33 (53.66-54.99); 59.76 (59.16-60.36); 46.33 (45.33-47.33); 63.93 (63.68-64.17); 61.39 (61.07-61.70)
Menni_2: 71.13 (71.01-71.24); 49.33 (48.77-49.88); 57.50 (57.04-57.97); 39.91 (39.27-40.54); 67.41 (67.21-67.60); 66.36 (66.10-66.62)
Roland: 69.38 (69.30-69.46); 51.44 (50.86-52.02); 61.93 (61.46-62.41); 40.68 (39.98-41.39); 67.06 (66.87-67.26); 67.32 (67.05-67.58)
Smith: 71.11 (71.05-71.18); 53.43 (52.85-54.01); 62.47 (61.98-62.97); 45.12 (44.42-45.82); 67.30 (67.11-67.49); 62.06 (61.80-62.32)
Zoabi_55: 70.71 (70.65-70.77); 32.96 (32.37-33.54); 47.76 (47.32-48.20); 29.95 (29.29-30.60); 57.86 (57.69-58.03); 59.05 (58.80-59.31)
Zoabi_65: 70.73 (70.67-70.79); 32.86 (32.28-33.44); 47.79 (47.36-48.23); 29.91 (29.27-30.55); 57.72 (57.55-57.88); 59.00 (58.74-59.25)
CDC: 73.42 (73.36-73.48); 23.43 (23.14-23.72); 45.84 (45.46-46.21); 27.38 (27.00-27.75); 62.60 (62.42-62.78); 62.13 (61.88-62.39)
Shoer: 70.45 (70.39-70.52); 50.95 (50.37-51.54); 62.41 (61.93-62.89); 44.57 (43.86-45.28); 67.49 (67.30-67.69); 66.76 (66.52-67.00)
Bhattacharya: 69.77 (69.70-69.83); 51.90 (51.31-52.50); 62.78 (62.30-63.26); 39.41 (38.84-39.97); 67.67 (67.48-67.87); 66.81 (66.52-67.10)
WHO: 23.92 (23.83-24.01); 24.08 (23.45-24.70); 24.69 (24.15-25.24); 27.29 (26.52-28.06); 25.14 (24.90-25.38); 30.97 (30.59-31.35)
Perez: 59.47 (59.39-59.55); 45.20 (44.56-45.83); 52.27 (51.71-52.82); 32.93 (32.23-33.64); 58.12 (57.89-58.35); 61.00 (60.70-61.30)
Mika: 69.43 (69.37-69.49); 51.43 (50.86-52.01); 62.16 (61.68-62.63); 45.29 (44.65-45.94); 67.08 (66.89-67.28); 66.40 (66.13-66.68)
Akinbami_1: 12.85 (12.77-12.94); 11.33 (10.72-11.93); 10.22 (9.82-10.62); 13.38 (12.58-14.18); 11.48 (11.26-11.70); 17.70 (17.34-18.07)
Akinbami_2: 14.69 (14.60-14.78); 9.41 (8.89-9.92); 9.59 (9.16-10.01); 13.16 (12.35-13.98); 10.81 (10.60-11.03); 17.14 (16.80-17.49)
Akinbami_3: 27.84 (27.73-27.94); 20.23 (19.66-20.81); 21.67 (21.14-22.19); 18.98 (18.22-19.73); 26.31 (26.05-26.56); 28.93 (28.57-29.29)
Salomon: 30.97 (30.87-31.07); 25.52 (24.84-26.20); 27.12 (26.58-27.66); 30.64 (29.93-31.35); 28.36 (28.10-28.61); 39.35 (38.98-39.72)
Astley: 73.72 (73.65-73.78); 48.29 (47.58-49.00); 62.47 (61.98-62.97); 44.13 (43.32-44.93); 67.45 (67.24-67.65); 66.85 (66.61-67.09)

Since the F1 score is highly affected by imbalanced classes [7], we computed the averages of the F1 score yielded by the detection methods for three groups: the broad set of the six countries, the set of countries with high TPR (Brazil, Turkey, and South Africa), and the set with low TPR (Canada, Israel, and Japan), for 2020, 2021, and the entire interval 2020-2021 (Table 4). For 2020, when there was no vaccination yet, the most efficient method was Astley (average: 60.49%). In the Astley method, the most relevant features are cough, stuffy or runny nose, aches or muscle pain, headache, sore throat, and fever. In 2021, when vaccination began, Mika was the most effective method (average: 58.35%). In the Mika method, fever, cough, loss of taste and smell, and gastrointestinal problems are considered for COVID-19 detection.
In the full article [13], we compared the various detection methods in terms of sensitivity, specificity, and precision. 4 CONCLUSIONS In this work, we conduct a comparison of various COVID-19 diagnosis methods based on survey information using datasets extracted from the global UMD-CTIS survey. More precisely, we compare the different methods for six countries and two periods (with and without vaccines) using the F1 score as a performance metric. From these results, we highlight the techniques showing the best F1 score. It is important to mention that, as can be seen in Tables 2 and 3, none of the methods achieve an F1 score above 75%, indicating that no model has a superior performance. Additional results and a more extended discussion can be found in the full version of the article [13]. 5 ETHICAL DECLARATION The Ethics Board (IRB) of IMDEA Networks Institute gave ethical approval for this work on 2021/07/05. IMDEA Networks has signed Data Use Agreements with Facebook and the University of Maryland (UMD) to access their data, specifically, UMD project 1587016-3 entitled C-SPEC: Symptom Survey: COVID-19, entitled ILI Community-Surveillance Study. The data used in this study was collected by the University of Maryland through The University of Maryland Social Data Science Center Global COVID-19 Trends and Impact Survey in partnership with Facebook. Informed consent has been obtained from all participants in this survey by this institution. All the methods in this study have been carried out in accordance with relevant ethics and privacy guidelines and regulations. 6 AVAILABILITY OF DATA AND MATERIALS The data presented in this paper (in aggregated form) and the programs used to process it will be openly accessible at https://github.com/GCGImdea/coronasurveys/.
The microdata of the CTIS survey from which the aggregated data was obtained cannot be shared, as per the Data Use Agreements signed with Facebook and the University of Maryland (UMD). 7 FUNDING/SUPPORT This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.

Table 3: F1 score and its 95% confidence interval for the selected countries for 2021, in %. Values per method, in order: Brazil; Canada; Israel; Japan; Turkey; South Africa.
Menni_1: 59.24 (59.18-59.31); 49.38 (49.02-49.74); 57.31 (56.96-57.65); 49.24 (49.16-49.83); 59.65 (59.44-59.87); 58.28 (58.06-58.50)
Menni_2: 66.54 (66.49-66.59); 39.82 (39.59-40.05); 53.46 (53.21-53.70); 42.60 (42.37-42.84); 62.71 (62.56-62.85); 66.50 (66.33-66.68)
Roland: 65.76 (65.71-65.82); 46.28 (46.03-46.53); 57.16 (56.86-57.46); 42.82 (42.62-43.03); 64.13 (63.96-64.31); 64.41 (64.23-64.59)
Smith: 63.37 (63.32-63.42); 50.28 (49.99-50.57); 58.00 (57.68-58.33); 51.48 (51.23-51.74); 64.38 (64.21-64.55); 61.62 (61.45-61.80)
Zoabi_55: 59.83 (59.79-59.88); 37.31 (37.01-37.60); 39.63 (39.28-39.98); 33.71 (33.45-33.98); 52.14 (51.88-52.40); 59.62 (59.47-59.77)
Zoabi_65: 59.78 (59.74-59.83); 37.10 (36.81-37.39); 39.64 (39.29-39.99); 33.36 (33.11-33.62); 52.06 (51.80-52.31); 59.54 (59.38-59.69)
CDC: 63.22 (63.17-63.26); 27.41 (27.28-27.55); 38.78 (38.59-38.97); 28.54 (28.40-28.68); 55.96 (55.81-56.11); 61.25 (61.10-61.39)
Shoer: 65.81 (65.76-65.87); 41.10 (40.84-41.36); 53.67 (53.37-53.97); 45.42 (45.07-45.78); 64.18 (64.01-64.35); 64.97 (64.80-65.15)
Bhattacharya: 64.16 (64.11-64.22); 49.22 (48.96-49.49); 58.76 (58.48-59.03); 45.82 (45.59-46.05); 64.61 (64.44-64.78); 63.40 (63.22-63.59)
WHO: 23.62 (23.56-23.68); 26.01 (25.66-26.35); 27.92 (27.59-28.24); 34.05 (33.74-34.37); 27.72 (27.49-27.94); 32.78 (32.58-32.98)
Perez: 54.85 (54.79-54.90); 44.70 (44.40-45.00); 51.27 (50.93-51.61); 39.72 (39.45-40.00); 56.03 (55.86-56.21); 59.17 (58.98-59.35)
Mika: 65.33 (65.28-65.38); 46.76 (46.40-47.12); 57.50 (57.22-57.79); 52.41 (51.73-53.09); 64.13 (63.96-64.31); 63.98 (63.81-64.15)
Akinbami_1: 12.02 (11.96-12.07); 11.43 (11.17-11.70); 10.60 (10.33-10.88); 11.11 (10.82-11.39); 13.86 (13.69-14.03); 15.86 (15.66-16.06)
Akinbami_2: 12.02 (12.05-12.16); 8.03 (7.79-8.27); 11.48 (11.20-11.75); 9.10 (8.83-9.31); 11.80 (11.64-11.96); 13.61 (13.44-13.79)
Akinbami_3: 26.59 (26.00-26.11); 20.96 (20.64-21.27); 21.96 (21.62-22.30); 19.90 (19.63-20.17); 26.35 (26.12-26.58); 28.08 (27.85-28.31)
Salomon: 30.15 (30.11-30.24); 28.06 (27.70-28.43); 30.72 (30.39-31.05); 37.27 (36.97-37.57); 31.31 (31.09-31.53); 38.03 (37.83-38.23)
Astley: 65.95 (65.90-66.01); 45.07 (44.74-45.40); 58.62 (58.29-58.94); 50.39 (50.08-50.70); 63.67 (63.50-63.85); 64.06 (63.88-64.24)

Table 4: Average F1 score (in %) for three country groups: the overall six countries (Overall), the countries with high TPR (High TPR: Brazil, Turkey, and South Africa), and the countries with low TPR (Low TPR: Canada, Israel, and Japan), for 2020, 2021, and 2020-2021. Each period gives Overall, Low TPR, High TPR.
Menni_1: 2020: 58.55, 53.47, 63.63; 2021: 55.52, 51.98, 59.06; 2020-2021: 57.03, 52.73, 61.34
Menni_2: 2020: 58.61, 48.91, 68.30; 2021: 55.27, 45.29, 65.25; 2020-2021: 56.94, 47.10, 66.78
Roland: 2020: 59.64, 51.35, 67.92; 2021: 56.76, 48.75, 64.77; 2020-2021: 58.20, 50.05, 66.34
Smith: 2020: 60.25, 53.67, 66.82; 2021: 58.19, 53.25, 63.12; 2020-2021: 59.22, 53.46, 64.97
Zoabi_55: 2020: 49.72, 36.89, 62.54; 2021: 47.04, 36.88, 57.20; 2020-2021: 48.38, 36.89, 59.87
Zoabi_65: 2020: 49.67, 36.85, 62.48; 2021: 46.91, 36.70, 57.13; 2020-2021: 48.29, 36.78, 59.81
CDC: 2020: 49.13, 32.22, 66.05; 2021: 45.86, 31.58, 60.14; 2020-2021: 47.50, 31.90, 63.10
Shoer: 2020: 60.44, 52.64, 68.23; 2021: 55.86, 46.73, 64.99; 2020-2021: 58.15, 49.69, 66.61
Bhattacharya: 2020: 59.72, 51.36, 68.08; 2021: 57.66, 51.27, 64.06; 2020-2021: 58.69, 51.32, 66.07
WHO: 2020: 26.02, 25.35, 26.68; 2021: 28.68, 29.33, 28.04; 2020-2021: 27.35, 27.34, 27.36
Perez: 2020: 51.50, 43.47, 59.53; 2021: 50.96, 45.23, 56.68; 2020-2021: 51.23, 44.35, 58.11
Mika: 2020: 60.30, 52.96, 67.64; 2021: 58.35, 52.22, 64.48; 2020-2021: 59.33, 52.59, 66.06
Akinbami_1: 2020: 12.83, 11.64, 14.01; 2021: 12.48, 11.05, 13.91; 2020-2021: 12.65, 11.35, 13.96
Akinbami_2: 2020: 12.47, 10.72, 14.21; 2021: 11.02, 9.54, 12.51; 2020-2021: 11.75, 10.13, 13.36
Akinbami_3: 2020: 23.99, 20.29, 27.69; 2021: 23.97, 20.94, 27.01; 2020-2021: 23.98, 20.62, 27.35
Salomon: 2020: 30.33, 27.76, 32.89; 2021: 32.59, 32.02, 33.16; 2020-2021: 31.46, 29.89, 33.03
Astley: 2020: 60.49, 51.63, 69.34; 2021: 57.96, 51.36, 64.56; 2020-2021: 59.22, 51.50, 66.95
cqVYZkoZHY
Consistent Comparison of Symptom-based Methods for COVID-19 Infection Detection
3: Marginally above acceptance threshold
This paper compares the accuracy of many methods that detect COVID-19-positive cases. Most of these methods either propose simple rules or build machine learning models that determine a COVID-19-positive case based on certain individual attributes. It is not entirely clear, but I believe the authors train the ML-based models on the same dataset (UMD-CTIS Survey) by splitting it randomly and using 80% for training and 20% for evaluating the performance. I do not understand exactly why this particular method of comparison is desirable.
1: The reviewer's evaluation is an educated guess
PhAOtEHLo1
KDD.org/2023/Workshop/epiDAMIK
2023
Consistent Comparison of Symptom-based Methods for COVID-19 Infection Detection (Extended Abstract)
["Jes\u00fas Rufino", "Juan Marcos Ramirez", "Jos\u00e9 Aguilar", "Carlos Baquero", "Jaya Champati", "Davide Frey", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
During the global pandemic crisis, several COVID-19 diagnosis methods based on survey information have been proposed with the purpose of providing medical staff with quick detection tools that allow them to efficiently plan the limited healthcare resources. In general, these methods have been developed to detect COVID-19-positive cases from a particular combination of self-reported symptoms. In addition, these methods have been evaluated using datasets extracted from different studies with different characteristics. On the other hand, the University of Maryland, in partnership with Facebook, launched the Global COVID-19 Trends and Impact Survey (UMD-CTIS), the largest health surveillance tool to date that has collected information from 114 countries/territories from April 2020 to June 2022. This survey collected information on various individual features including gender, age groups, self-reported symptoms, isolation measures, and mental health status, among others. In this paper, we compare the performance of different COVID-19 diagnosis methods using the information collected by UMD-CTIS, for the years 2020 and 2021, in six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. The evaluation of these methods with homogeneous data across countries and years provides a solid and consistent comparison among them.
["COVID-19 diagnosis", "F1-score", "light gradient boosting machine", "logistic regression", "rule-based methods."]
ABSTRACT
During the global pandemic crisis, several COVID-19 diagnosis methods based on survey information have been proposed with the purpose of providing medical staff with quick detection tools that allow them to efficiently plan the limited healthcare resources. In general, these methods have been developed to detect COVID-19-positive cases from a particular combination of self-reported symptoms. In addition, these methods have been evaluated using datasets extracted from different studies with different characteristics. On the other hand, the University of Maryland, in partnership with Facebook, launched the Global COVID-19 Trends and Impact Survey (UMD-CTIS), the largest health surveillance tool to date that has collected information from 114 countries/territories from April 2020 to June 2022. This survey collected information on various individual features including gender, age groups, self-reported symptoms, isolation measures, and mental health status, among others. In this paper, we compare the performance of different COVID-19 diagnosis methods using the information collected by UMD-CTIS, for the years 2020 and 2021, in six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. The evaluation of these methods with homogeneous data across countries and years provides a solid and consistent comparison among them.

KEYWORDS
COVID-19 diagnosis, F1-score, light gradient boosting machine, logistic regression, rule-based methods.

1 INTRODUCTION
In December 2019, the coronavirus disease 2019 (COVID-19) emerged in China, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [17]. Within a few months, this disease led to a global pandemic crisis that has challenged national healthcare systems [6]. More precisely, by June 2023, the cumulative number of confirmed cases worldwide exceeded 688 million, and officially over 6,800,000 people have died from COVID-19; https://www.worldometers.info/coronavirus/. In this context, the planning of the healthcare resources (e.g., the estimation of the number of hospital beds or intensive care units needed for COVID-19 patients) has been determined by the availability of quick and efficient instruments for the diagnosis of active cases.

The reverse transcriptase-polymerase chain reaction (RT-PCR) test has been considered the standard tool to detect infected people [5]. However, real-time disease monitoring based on the RT-PCR test demands material and human resources that are not always available. To overcome these limitations, various diagnosis methods based on survey information have been proposed that combine multiple individual features (age, gender, symptoms, demographic data, etc.) to characterize COVID-19-infected people [1–4, 9–12, 14–16, 18, 19]. Specifically, most of these methods propose simple rules or build machine learning models that evaluate a set of individual attributes to determine a COVID-19-positive case. However, a consistent comparison framework that evaluates the performance yielded by the different methods is missing, since the generated models and the corresponding conclusions are assessed using different datasets that are heterogeneous in size and type.

On the other hand, in April 2020, the University of Maryland Global COVID-19 Trends and Impact Survey (UMD-CTIS), in partnership with Facebook, launched the largest global health surveillance platform to date [8]. More precisely, this project stored the responses provided by a subset of invited Facebook users about different topics related to the COVID-19 pandemic, such as the presence of symptoms, RT-PCR outcomes, and vaccination acceptance, among others. This data collection instrument was available in 56 languages and it recorded tens of millions of responses from 114 countries or territories worldwide.

In this paper, we conduct a consistent comparison of different methods that detect COVID-19-positive cases from a combination of features collected from surveys.
To this end, we take into account the information included in the UMD-CTIS records extracted from six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. For each country, the models are trained using a randomly selected subset of tested individuals who reported at least one symptom. Furthermore, we compare the performance for two years: 2020 and 2021, which represent two different periods of the pandemic without and with vaccination, respectively. We compare the detection methods using four performance metrics: F1-score, sensitivity, specificity, and precision (only F1-score is presented in this extended abstract). Overall, the detection methods exhibiting the best performances across different groups and metrics are Mika [10] (F1-score: 59.33%), Astley [3] (F1-score: 59.22%), Smith [16] (F1-score: 59.22%), Bhattacharya [4] (F1-score: 58.69%), Roland [12] (F1-score: 58.20%), Shoer [15] (F1-score: 58.15%), Menni_1 [9] (F1-score: 57.03%), and Menni_2 [9] (F1-score: 56.94%).

2 MATERIALS AND METHODS

2.1 UMD-CTIS Survey
We perform a consistent comparative study of various COVID-19 active case detection methods from data provided by the UMD-CTIS survey. More precisely, since April 23, 2020, Facebook users worldwide were invited to participate in the UMD-CTIS survey. Users who accepted the invitation were moved to a web survey platform, where potential participants must report age > 18 and consent to data use before responding to the survey. The survey instrument consists of a web-based questionnaire collecting information on gender, age groups, symptoms, COVID testing, isolation, and vaccination, among others. Furthermore, the survey instrument was continuously updated to aggregate new items. Finally, UMD organized and stored daily microdata that were further processed to develop our comparative study.

2.2 Comparative study design
In this work, we compare the performance of various COVID-19 detection methods using the information provided by UMD-CTIS data extracted from six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. These countries are selected based on geographical diversity and the large amount of available data. In addition, this comparative study is performed for two non-overlapping periods: (2020) from April 23 to December 31, 2020, and (2021) from January 1 to December 31, 2021. Notice that the end of 2020 matches the start of the first COVID-19 vaccination campaigns. Therefore, we can compare the performance of the detection methods without and with information on vaccination. Table 1 summarizes the characteristics of the study population for the various countries and for the two periods under test.

For every country and period, we build a dataset by picking the answers reporting lab test results in the last 14 days (the survey does not collect the test type) and at least one potential COVID-19 symptom, i.e., this comparative study selects the tested and symptomatic cases. We select symptomatic cases because feature-based predictive methods typically aim at finding the combination of symptoms that detect infected people. In addition, we choose the tested individuals with the aim of obtaining the ground truth sample set that allows us to evaluate the performance of the different methods quantitatively. Since questionnaires contain categorical data, we apply binary encoding (dummy coding) to each response. This leads to datasets with 201 features (attributes, columns, or variables) for 2020, and the datasets have between 431 and 452 columns for 2021 depending on the selected country. For each dataset, this study evaluates the performance of the various COVID-19 active case detection methods.
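The binary encoding (dummy coding) step described above can be sketched as follows. This is a minimal illustration rather than the authors' code; the question names (`gender`, `age_group`) and answer values are hypothetical placeholders, not the actual UMD-CTIS schema.

```python
# Minimal sketch of dummy (one-hot) coding for categorical survey answers.
# Question names and answer values are illustrative, not the UMD-CTIS schema.

def dummy_encode(rows, categorical_keys):
    """Expand each categorical answer into 0/1 indicator columns."""
    # Collect the observed categories per question, in a stable order.
    categories = {k: sorted({r[k] for r in rows}) for k in categorical_keys}
    encoded = []
    for r in rows:
        vec = {}
        for k in categorical_keys:
            for v in categories[k]:
                vec[f"{k}={v}"] = 1 if r[k] == v else 0
        encoded.append(vec)
    return encoded

rows = [
    {"gender": "female", "age_group": "25-34"},
    {"gender": "male", "age_group": "55-64"},
]
enc = dummy_encode(rows, ["gender", "age_group"])
# enc[0] -> {"gender=female": 1, "gender=male": 0,
#            "age_group=25-34": 1, "age_group=55-64": 0}
```

In the paper's setting, this expansion over all questionnaire items yields the 201 indicator columns reported for 2020 and the 431-452 columns for 2021, depending on the country.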
To this end, our study divided every dataset into 100 partitions. For each trial, 80% of the dataset rows (questionnaires or samples) were randomly selected as training samples, and the remaining 20% were used to test the various methods.

2.3 Detection methods under comparison
In this work, we compare the performance of various COVID-19 diagnosis methods belonging to three categories:
(1) Rule-based methods: CDC [1], WHO [18], Akinbami [2], Salomon [14], Perez [11].
(2) Logistic regression techniques: Menni [9], Roland [12], Smith [16], Shoer [15], Bhattacharya [4], Mika [10].
(3) Tree-based machine-learning models: Zoabi [19], Astley [3].

In this work, we have implemented two versions of the Menni method and two versions of the Zoabi method. Note that UMD-CTIS data did not register whether the respondent skipped meals. Therefore, we modified the Menni method by fixing the skipped-meals variable to zero (Menni_1). Furthermore, we followed the procedure reported in [9] to build the logistic regression model from individual features available in our dataset (Menni_2). In other words, we built a regression model that considers the features: age, gender, loss of smell and taste, cough, and fatigue. In the case of the Zoabi method, notice that the UMD-CTIS age ranges do not have a boundary at 60. The boundary is either at 55 or 65. We have created two different models, one for ages greater than 55 years (Zoabi_55) and the other for ages greater than 65 years (Zoabi_65). Further information regarding the methods under test can be found in the corresponding references and in the full version of the article [13].

2.4 Benchmarking detection methods
First, we use the F1-score to quantitatively assess the performance of the various detection methods. To this end, our procedure first obtains the predictions over the test set for each trial.
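The evaluation protocol above (100 random partitions, 80% training / 20% test per trial) can be sketched as follows; `random_splits` is a hypothetical helper for illustration, not code from the paper.

```python
import random

def random_splits(n_samples, n_trials=100, train_frac=0.8, seed=0):
    """Yield (train_idx, test_idx) index pairs for repeated random splits."""
    rng = random.Random(seed)  # fixed seed so the splits are reproducible
    idx = list(range(n_samples))
    for _ in range(n_trials):
        rng.shuffle(idx)
        cut = int(train_frac * n_samples)
        # Slicing copies the list, so later shuffles do not mutate this split.
        yield idx[:cut], idx[cut:]

# Three illustrative trials over a toy dataset of 10 questionnaires.
splits = list(random_splits(10, n_trials=3))
```

Each detection method would then be refit on the training indices and scored on the test indices of every trial, with the per-trial scores averaged over the 100 realizations.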
From the predicted estimates and the ground truth data, the procedure identifies the number of true positives TP, false positives FP, true negatives TN, and false negatives FN. Then, the F1-score is obtained as follows:

    F1 = 2 TP / (2 TP + FP + FN).    (1)

Tables 2 and 3 display the ensemble average and the CI of the F1-score for the six countries and for 2020 and 2021, respectively. Specifically, each value in these tables is obtained by averaging 100 realizations of the corresponding experiment. Tables with the sensitivity, specificity, and precision values obtained are included in the full version of the article [13].

3 RESULTS
As can be seen in Table 1, 83,238 respondents from Brazil reported a test outcome and at least one symptom in 2020. In this cohort, 44,963 participants reported a positive test result, and 38,275 respondents had a negative test outcome. Table 1 also includes the test positive rate (TPR), where TPR = (100 × positive) / (tested symptomatic). For example, the TPR for Brazil 2020 is 54.02%. On the other hand, for Brazil 2021, the dataset was extracted from 262,683 participants who reported at least one symptom and the outcome of a test done in the last 14 days. In this case, 106,471 respondents reported a positive test result, and 156,212 questionnaires informed a negative test outcome, with a TPR of 40.53%. In summary, the number of tested symptomatic, the number of positive cases, and the number of negative results for the remaining countries in 2020 and 2021 are displayed in Table 1. Additionally, Table 1 shows information about other individual features such as gender and age groups.

Table 2 shows the ensemble averages with the corresponding 95% confidence intervals (CI) of the F1 score yielded by the various detection methods for the different countries and for 2020.
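Equation (1) and the confusion counts it depends on can be computed as below. This is a straightforward sketch of the metric with toy labels, not the authors' pipeline or real survey data.

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, FN from binary ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, fn

def f1_score(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN), as in Eq. (1)."""
    return 2 * tp / (2 * tp + fp + fn)

# Toy example: 5 respondents, 1 = COVID-19-positive.
y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
tp, fp, fn = confusion_counts(y_true, y_pred)
f1 = f1_score(tp, fp, fn)  # 2*2 / (2*2 + 1 + 1) = 0.666...
```

The tables report the mean of this score over the 100 trials described in Section 2.2, together with a 95% confidence interval.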
In particular, the methods with the best F1 scores for each country are: Brazil (Astley: 73.72%), Canada (Menni_1: 54.33%), Israel (Bhattacharya: 62.78%), Japan (Menni_1: 46.33%), Turkey (Bhattacharya: 67.67%), and South Africa (Roland: 67.32%). The F1 scores in % and the CIs obtained for 2021 are displayed in Table 3. For 2021, the best F1 scores are: Brazil (Menni_2: 66.54%), Canada (Smith: 50.28%), Israel (Bhattacharya: 58.76%), Japan (Mika: 52.41%), Turkey (Bhattacharya: 64.61%), and South Africa (Menni_2: 66.50%). As observed in Tables 2 and 3, none of the methods achieved an F1 score of 74% or above, indicating that no model is very good. According to Table 1, Brazil, Turkey, and South Africa exhibit TPR values at least twofold higher than those obtained from Canada, Israel, and Japan.

Table 1: Characteristics of the study population for the various countries and for two non-overlapped periods (2020 and 2021).

Characteristic             Brazil          Canada         Israel         Japan          Turkey         South Africa
                           2020    2021    2020   2021    2020   2021    2020   2021    2020   2021    2020   2021
1. Tested symptomatic, N   83238   262683  8927   33997   5944   19063   4698   41010   15952  28896   7883   23038
2. Test outcome
 (a) Positive, N           44963   106471  838    3433    1238   2869    532    4011    6167   9228    2866   8459
 (b) Negative, N           38275   156212  8089   30564   4706   16194   4166   36999   9785   19668   5017   14579
 (c) TPR, %                54.02   40.53   9.39   10.10   20.83  15.05   11.32  9.78    38.66  31.94   36.35  36.71
3. Gender
 (a) Female, N             45357   130235  5438   19472   2941   9290    1679   14283   3939   7185    3923   11291
 (b) Male, N               24928   76689   2315   9824    2199   6746    2388   20791   8920   15292   2525   6730
4. Age groups
 (a) 18-24, N              8270    27474   1136   3248    583    1498    179    871     1716   2267    739    1580
 (b) 25-34, N              19596   56227   2337   7172    1144   3069    577    3797    4375   5756    2252   4889
 (c) 35-44, N              21061   57452   1750   6688    1041   3333    997    7527    4043   7110    1801   4721
 (d) 45-54, N              13776   39122   1210   5215    933    3115    1216   10413   2071   4594    1141   3878
 (e) 55-64, N              6968    22190   954    4478    880    2634    828    8724    862    2400    491    2124
 (f) 65-74, N              140     6016    308    2421    510    1957    479    3529    158    719     1667   799
 (g) 75+, N                233     1025    126    825     143    627     66     846     21     134     27     230

Table 2: F1 score and its 95% confidence interval for the selected countries for 2020, in %.

Method        Brazil                 Canada                 Israel                 Japan                  Turkey                 South Africa
Menni_1       65.56 (65.48 - 65.64)  54.33 (53.66 - 54.99)  59.76 (59.16 - 60.36)  46.33 (45.33 - 47.33)  63.93 (63.68 - 64.17)  61.39 (61.07 - 61.70)
Menni_2       71.13 (71.01 - 71.24)  49.33 (48.77 - 49.88)  57.50 (57.04 - 57.97)  39.91 (39.27 - 40.54)  67.41 (67.21 - 67.60)  66.36 (66.10 - 66.62)
Roland        69.38 (69.30 - 69.46)  51.44 (50.86 - 52.02)  61.93 (61.46 - 62.41)  40.68 (39.98 - 41.39)  67.06 (66.87 - 67.26)  67.32 (67.05 - 67.58)
Smith         71.11 (71.05 - 71.18)  53.43 (52.85 - 54.01)  62.47 (61.98 - 62.97)  45.12 (44.42 - 45.82)  67.30 (67.11 - 67.49)  62.06 (61.80 - 62.32)
Zoabi_55      70.71 (70.65 - 70.77)  32.96 (32.37 - 33.54)  47.76 (47.32 - 48.20)  29.95 (29.29 - 30.60)  57.86 (57.69 - 58.03)  59.05 (58.80 - 59.31)
Zoabi_65      70.73 (70.67 - 70.79)  32.86 (32.28 - 33.44)  47.79 (47.36 - 48.23)  29.91 (29.27 - 30.55)  57.72 (57.55 - 57.88)  59.00 (58.74 - 59.25)
CDC           73.42 (73.36 - 73.48)  23.43 (23.14 - 23.72)  45.84 (45.46 - 46.21)  27.38 (27.00 - 27.75)  62.60 (62.42 - 62.78)  62.13 (61.88 - 62.39)
Shoer         70.45 (70.39 - 70.52)  50.95 (50.37 - 51.54)  62.41 (61.93 - 62.89)  44.57 (43.86 - 45.28)  67.49 (67.30 - 67.69)  66.76 (66.52 - 67.00)
Bhattacharya  69.77 (69.70 - 69.83)  51.90 (51.31 - 52.50)  62.78 (62.30 - 63.26)  39.41 (38.84 - 39.97)  67.67 (67.48 - 67.87)  66.81 (66.52 - 67.10)
WHO           23.92 (23.83 - 24.01)  24.08 (23.45 - 24.70)  24.69 (24.15 - 25.24)  27.29 (26.52 - 28.06)  25.14 (24.90 - 25.38)  30.97 (30.59 - 31.35)
Perez         59.47 (59.39 - 59.55)  45.20 (44.56 - 45.83)  52.27 (51.71 - 52.82)  32.93 (32.23 - 33.64)  58.12 (57.89 - 58.35)  61.00 (60.70 - 61.30)
Mika          69.43 (69.37 - 69.49)  51.43 (50.86 - 52.01)  62.16 (61.68 - 62.63)  45.29 (44.65 - 45.94)  67.08 (66.89 - 67.28)  66.40 (66.13 - 66.68)
Akinbami_1    12.85 (12.77 - 12.94)  11.33 (10.72 - 11.93)  10.22 (9.82 - 10.62)   13.38 (12.58 - 14.18)  11.48 (11.26 - 11.70)  17.70 (17.34 - 18.07)
Akinbami_2    14.69 (14.60 - 14.78)  9.41 (8.89 - 9.92)     9.59 (9.16 - 10.01)    13.16 (12.35 - 13.98)  10.81 (10.60 - 11.03)  17.14 (16.80 - 17.49)
Akinbami_3    27.84 (27.73 - 27.94)  20.23 (19.66 - 20.81)  21.67 (21.14 - 22.19)  18.98 (18.22 - 19.73)  26.31 (26.05 - 26.56)  28.93 (28.57 - 29.29)
Salomon       30.97 (30.87 - 31.07)  25.52 (24.84 - 26.20)  27.12 (26.58 - 27.66)  30.64 (29.93 - 31.35)  28.36 (28.10 - 28.61)  39.35 (38.98 - 39.72)
Astley        73.72 (73.65 - 73.78)  48.29 (47.58 - 49.00)  62.47 (61.98 - 62.97)  44.13 (43.32 - 44.93)  67.45 (67.24 - 67.65)  66.85 (66.61 - 67.09)

Since the F1 score is highly affected by imbalanced classes [7], we computed the averages of the F1 score yielded by the detection methods for three groups: the broad set of the six countries, and the sets of countries with high TPR (Brazil, Turkey, and South Africa) and low TPR (Canada, Israel, and Japan), for 2020, 2021, and the entire interval 2020-2021 (Table 4). For 2020, when there was no vaccination yet, the most efficient method was Astley (average: 60.49%). In the Astley method, the most relevant features are cough, stuffy or runny nose, aches or muscle pain, headache, sore throat, and fever. In 2021, when vaccination began, Mika was the most effective method (average: 58.35%). In the Mika method, fever, cough, loss of taste and smell, and gastrointestinal problems are considered for COVID-19 detection.
In the full article [13], we compared the various detection methods in terms of sensitivity, specificity, and precision.

4 CONCLUSIONS
In this work, we conduct a comparison of various COVID-19 diagnosis methods based on survey information using datasets extracted from the global UMD-CTIS survey. More precisely, we compare the different methods for six countries and two periods (with and without vaccines) using the F1 score as a performance metric. From these results, we highlight the techniques showing the best F1 score. It is important to mention that, as can be seen in Tables 2 and 3, none of the methods achieves an F1 score above 75%, indicating that no model has a superior performance. Additional results and a more extended discussion can be found in the full version of the article [13].

5 ETHICAL DECLARATION
The Ethics Board (IRB) of IMDEA Networks Institute gave ethical approval for this work on 2021/07/05. IMDEA Networks has signed Data Use Agreements with Facebook and the University of Maryland (UMD) to access their data, specifically, UMD project 1587016-3 entitled C-SPEC: Symptom Survey: COVID-19, entitled ILI Community-Surveillance Study. The data used in this study was collected by the University of Maryland through The University of Maryland Social Data Science Center Global COVID-19 Trends and Impact Survey in partnership with Facebook. Informed consent has been obtained from all participants in this survey by this institution. All the methods in this study have been carried out in accordance with relevant ethics and privacy guidelines and regulations.

6 AVAILABILITY OF DATA AND MATERIALS
The data presented in this paper (in aggregated form) and the programs used to process it will be openly accessible at https://github.com/GCGImdea/coronasurveys/.
The microdata of the CTIS survey from which the aggregated data was obtained cannot be shared, as per the Data Use Agreements signed with Facebook and the University of Maryland (UMD).

7 FUNDING/SUPPORT
This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.

Table 3: F1 score and its 95% confidence interval for the selected countries for 2021, in %.

Method        Brazil                 Canada                 Israel                 Japan                  Turkey                 South Africa
Menni_1       59.24 (59.18 - 59.31)  49.38 (49.02 - 49.74)  57.31 (56.96 - 57.65)  49.24 (49.16 - 49.83)  59.65 (59.44 - 59.87)  58.28 (58.06 - 58.50)
Menni_2       66.54 (66.49 - 66.59)  39.82 (39.59 - 40.05)  53.46 (53.21 - 53.70)  42.60 (42.37 - 42.84)  62.71 (62.56 - 62.85)  66.50 (66.33 - 66.68)
Roland        65.76 (65.71 - 65.82)  46.28 (46.03 - 46.53)  57.16 (56.86 - 57.46)  42.82 (42.62 - 43.03)  64.13 (63.96 - 64.31)  64.41 (64.23 - 64.59)
Smith         63.37 (63.32 - 63.42)  50.28 (49.99 - 50.57)  58.00 (57.68 - 58.33)  51.48 (51.23 - 51.74)  64.38 (64.21 - 64.55)  61.62 (61.45 - 61.80)
Zoabi_55      59.83 (59.79 - 59.88)  37.31 (37.01 - 37.60)  39.63 (39.28 - 39.98)  33.71 (33.45 - 33.98)  52.14 (51.88 - 52.40)  59.62 (59.47 - 59.77)
Zoabi_65      59.78 (59.74 - 59.83)  37.10 (36.81 - 37.39)  39.64 (39.29 - 39.99)  33.36 (33.11 - 33.62)  52.06 (51.80 - 52.31)  59.54 (59.38 - 59.69)
CDC           63.22 (63.17 - 63.26)  27.41 (27.28 - 27.55)  38.78 (38.59 - 38.97)  28.54 (28.40 - 28.68)  55.96 (55.81 - 56.11)  61.25 (61.10 - 61.39)
Shoer         65.81 (65.76 - 65.87)  41.10 (40.84 - 41.36)  53.67 (53.37 - 53.97)  45.42 (45.07 - 45.78)  64.18 (64.01 - 64.35)  64.97 (64.80 - 65.15)
Bhattacharya  64.16 (64.11 - 64.22)  49.22 (48.96 - 49.49)  58.76 (58.48 - 59.03)  45.82 (45.59 - 46.05)  64.61 (64.44 - 64.78)  63.40 (63.22 - 63.59)
WHO           23.62 (23.56 - 23.68)  26.01 (25.66 - 26.35)  27.92 (27.59 - 28.24)  34.05 (33.74 - 34.37)  27.72 (27.49 - 27.94)  32.78 (32.58 - 32.98)
Perez         54.85 (54.79 - 54.90)  44.70 (44.40 - 45.00)  51.27 (50.93 - 51.61)  39.72 (39.45 - 40.00)  56.03 (55.86 - 56.21)  59.17 (58.98 - 59.35)
Mika          65.33 (65.28 - 65.38)  46.76 (46.40 - 47.12)  57.50 (57.22 - 57.79)  52.41 (51.73 - 53.09)  64.13 (63.96 - 64.31)  63.98 (63.81 - 64.15)
Akinbami_1    12.02 (11.96 - 12.07)  11.43 (11.17 - 11.70)  10.60 (10.33 - 10.88)  11.11 (10.82 - 11.39)  13.86 (13.69 - 14.03)  15.86 (15.66 - 16.06)
Akinbami_2    12.02 (12.05 - 12.16)  8.03 (7.79 - 8.27)     11.48 (11.20 - 11.75)  9.10 (8.83 - 9.31)     11.80 (11.64 - 11.96)  13.61 (13.44 - 13.79)
Akinbami_3    26.59 (26.00 - 26.11)  20.96 (20.64 - 21.27)  21.96 (21.62 - 22.30)  19.90 (19.63 - 20.17)  26.35 (26.12 - 26.58)  28.08 (27.85 - 28.31)
Salomon       30.15 (30.11 - 30.24)  28.06 (27.70 - 28.43)  30.72 (30.39 - 31.05)  37.27 (36.97 - 37.57)  31.31 (31.09 - 31.53)  38.03 (37.83 - 38.23)
Astley        65.95 (65.90 - 66.01)  45.07 (44.74 - 45.40)  58.62 (58.29 - 58.94)  50.39 (50.08 - 50.70)  63.67 (63.50 - 63.85)  64.06 (63.88 - 64.24)

Table 4: Average F1 score (in %) for three country groups: the overall six countries (Overall), the countries with high TPR (High TPR: Brazil, Turkey, and South Africa), and the countries with low TPR (Low TPR: Canada, Israel, and Japan), for 2020, 2021, and 2020-2021.

              2020                        2021                        2020-2021
Method        Overall  Low TPR  High TPR  Overall  Low TPR  High TPR  Overall  Low TPR  High TPR
Menni_1       58.55    53.47    63.63     55.52    51.98    59.06     57.03    52.73    61.34
Menni_2       58.61    48.91    68.30     55.27    45.29    65.25     56.94    47.10    66.78
Roland        59.64    51.35    67.92     56.76    48.75    64.77     58.20    50.05    66.34
Smith         60.25    53.67    66.82     58.19    53.25    63.12     59.22    53.46    64.97
Zoabi_55      49.72    36.89    62.54     47.04    36.88    57.20     48.38    36.89    59.87
Zoabi_65      49.67    36.85    62.48     46.91    36.70    57.13     48.29    36.78    59.81
CDC           49.13    32.22    66.05     45.86    31.58    60.14     47.50    31.90    63.10
Shoer         60.44    52.64    68.23     55.86    46.73    64.99     58.15    49.69    66.61
Bhattacharya  59.72    51.36    68.08     57.66    51.27    64.06     58.69    51.32    66.07
WHO           26.02    25.35    26.68     28.68    29.33    28.04     27.35    27.34    27.36
Perez         51.50    43.47    59.53     50.96    45.23    56.68     51.23    44.35    58.11
Mika          60.30    52.96    67.64     58.35    52.22    64.48     59.33    52.59    66.06
Akinbami_1    12.83    11.64    14.01     12.48    11.05    13.91     12.65    11.35    13.96
Akinbami_2    12.47    10.72    14.21     11.02    9.54     12.51     11.75    10.13    13.36
Akinbami_3    23.99    20.29    27.69     23.97    20.94    27.01     23.98    20.62    27.35
Salomon       30.33    27.76    32.89     32.59    32.02    33.16     31.46    29.89    33.03
Astley        60.49    51.63    69.34     57.96    51.36    64.56     59.22    51.50    66.95
QaeeF0OdX2Y
This paper uses the UMD-CTIS dataset to evaluate the existing symptom-based detection methods.
3: Marginally above acceptance threshold
This paper uses the UMD-CTIS dataset to evaluate the existing symptom-based detection methods. Goodness: 1. The study covers many detection methods (10+) from three categories (rule-based methods, logistic regression-based methods, and tree-based methods), which gives an overview of the existing symptom-based detection methods. 2. The evaluation in both the 2020 period and 2021 period allows us to explore the influence of vaccines in symptom detection, which is especially useful since vaccines may make COVID-infected patients show fewer symptoms, which influences the detection method performance. Weakness: 1. The result section only includes the table explanation and lists the performance of different methods, while a more detailed explanation of why some methods are better and the takeaways are missing. 2. The evaluation metric now is only the F1 score. More metrics are useful to better evaluate the difference between each method. Besides, in such detection problems, a high recall is usually more important than precision. More discussions can focus on this point.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
PhAOtEHLo1
KDD.org/2023/Workshop/epiDAMIK
2023
Consistent Comparison of Symptom-based Methods for COVID-19 Infection Detection (Extended Abstract)
["Jes\u00fas Rufino", "Juan Marcos Ramirez", "Jos\u00e9 Aguilar", "Carlos Baquero", "Jaya Champati", "Davide Frey", "Rosa Elvira Lillo", "Antonio Fernandez Anta"]
During the global pandemic crisis, several COVID-19 diagnosis methods based on survey information have been proposed with the purpose of providing medical staff with quick detection tools that allow them to efficiently plan the limited healthcare resources. In general, these methods have been developed to detect COVID-19-positive cases from a particular combination of self-reported symptoms. In addition, these methods have been evaluated using datasets extracted from different studies with different characteristics. On the other hand, the University of Maryland, in partnership with Facebook, launched the Global COVID-19 Trends and Impact Survey (UMD-CTIS), the largest health surveillance tool to date that has collected information from 114 countries/territories from April 2020 to June 2022. This survey collected information on various individual features including gender, age groups, self-reported symptoms, isolation measures, and mental health status, among others. In this paper, we compare the performance of different COVID-19 diagnosis methods using the information collected by UMD-CTIS, for the years 2020 and 2021, in six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. The evaluation of these methods with homogeneous data across countries and years provides a solid and consistent comparison among them.
["COVID-19 diagnosis", "F1-score", "light gradient boosting machine", "logistic regression", "rule-based methods."]
ABSTRACTDuring the global pandemic crisis, several COVID-19 diagnosismethods based on survey information have been proposed withthe purpose of providing medical staff with quick detection toolsthat allow them to efficiently plan the limited healthcare resources.In general, these methods have been developed to detect COVID-19-positive cases from a particular combination of self-reportedsymptoms. In addition, these methods have been evaluated usingdatasets extracted from different studies with different characteris-tics. On the other hand, the University of Maryland, in partnershipwith Facebook, launched the Global COVID-19 Trends and ImpactSurvey (UMD-CTIS), the largest health surveillance tool to datethat has collected information from 114 countries/territories fromApril 2020 to June 2022. This survey collected information on vari-ous individual features including gender, age groups, self-reportedsymptoms, isolation measures, and mental health status, amongothers. In this paper, we compare the performance of differentCOVID-19 diagnosis methods using the information collected byUMD-CTIS, for the years 2020 and 2021, in six countries: Brazil,Canada, Israel, Japan, Turkey, and South Africa. The evaluation ofthese methods with homogeneous data across countries and yearsprovides a solid and consistent comparison among them.KEYWORDSCOVID-19 diagnosis, F1-score, light gradient boosting machine,logistic regression, rule-based methods.1 INTRODUCTIONIn December 2019, the coronavirus disease 2019 (COVID-19) emergedin China caused by the severe acute respiratory syndrome coron-avirus 2 (SARS-CoV-2) [17]. Within a few months, this disease ledto a global pandemic crisis that has challenged national health-care systems [ 6]. More precisely, by June 2023, the cumulativenumber of confirmed cases worldwide exceeded 688 million, andofficially over 6,800,000 people have died from COVID-19; https://www.worldometers.info/coronavirus/. 
In this context, the plan-ning of the healthcare resources (e.g., the estimation of the numberof hospital beds or intensive care units needed for COVID-19 pa-tients) has been determined by the availability of quick and efficientinstruments for the diagnosis of active cases.Thereverse transcriptase-polymerase chain reaction (RT-PCR) testhas been considered the standard tool to detect infected people [ 5].However, real-time disease monitoring based on the RT-PCR test de-mands material and human resources that are not always available.To overcome these limitations, various diagnosis methods based onsurvey information have been proposed that combine multiple indi-vidual features (age, gender, symptoms, demographic data, etc.) tocharacterize COVID-19-infected people [ 1–4,9–12,14–16,18,19].Specifically, most of these methods propose simple rules or buildmachine learning models that evaluate a set of individual attributesto determine a COVID-19-positive case. However, a consistent com-parison framework that evaluates the performance yielded by thedifferent methods is missing since the generated models and thecorresponding conclusions are assessed using different datasetsthat are heterogeneous in size and type.On the other hand, in April 2020, the University of MarylandGlobal COVID-19 Trends and Impact Survey (UMD-CTIS), in part-nership with Facebook, launched the largest global health surveil-lance platform to date [ 8]. More precisely, this project stored theresponses provided by a subset of Facebook invited users aboutdifferent topics related to the COVID-19 pandemic such as the pres-ence of symptoms, RT-PCR outcomes, and vaccination acceptance,among others. This data collection instrument was available in 56languages and it recorded tens of millions of responses from 114countries or territories worldwide.In this paper, we conduct a consistent comparison of differentmethods that detect COVID-19-positive cases from a combinationof features collected from surveys. 
To this end, we take into account the information included in the UMD-CTIS records extracted from six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. For each country, the models are trained using a randomly selected subset of tested individuals who reported at least one symptom. Furthermore, we compare the performance for two years, 2020 and 2021, which represent two different periods of the pandemic, without and with vaccination, respectively. We compare the detection methods using four performance metrics: F1-score, sensitivity, specificity, and precision (only F1-score is presented in this extended abstract). Overall, the detection methods exhibiting the best performances across different groups and metrics are Mika [10] (F1-score: 59.33%), Astley [3] (F1-score: 59.22%), Smith [16] (F1-score: 59.22%), Bhattacharya [4] (F1-score: 58.69%), Roland [12] (F1-score: 58.20%), Shoer [15] (F1-score: 58.15%), Menni_1 [9] (F1-score: 57.03%), and Menni_2 [9] (F1-score: 56.94%).

2 MATERIALS AND METHODS
2.1 UMD-CTIS Survey
We perform a consistent comparative study of various COVID-19 active case detection methods from data provided by the UMD-CTIS survey. More precisely, since April 23, 2020, Facebook worldwide users were invited to participate in the UMD-CTIS survey. Users who accepted the invitation were moved to a web survey platform, where potential participants must report age > 18 and consent to data use before responding to the survey.

Jesús Rufino, Juan Marcos Ramírez, Jose Aguilar, Carlos Baquero, Jaya Champati, Davide Frey, Rosa Elvira Lillo-Rodríguez, Antonio Fernández Anta

The survey instrument consists of a web-based questionnaire collecting information on gender, age groups, symptoms, COVID testing, isolation, and vaccination, among others. Furthermore, the survey instrument was continuously updated to aggregate new items.
Finally, UMD organized and stored daily microdata that were further processed to develop our comparative study.

2.2 Comparative study design
In this work, we compare the performance of various COVID-19 detection methods using the information provided by UMD-CTIS data extracted from six countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa. These countries are selected based on geographical diversity and the large amount of available data. In addition, this comparative study is performed for two non-overlapped periods: (2020) from April 23 to December 31, 2020, and (2021) from January 1 to December 31, 2021. Notice that the end of 2020 matches the start of the first COVID-19 vaccination campaigns. Therefore, we can compare the performance of the detection methods without and with information on vaccination. Table 1 summarizes the characteristics of the study population for the various countries and for the two periods under test.
For every country and period, we build a dataset by picking the answers reporting lab test results in the last 14 days (the survey does not collect the test type) and at least one potential COVID-19 symptom, i.e., this comparative study selects the tested and symptomatic cases. We select symptomatic cases because feature-based predictive methods typically aim at finding the combination of symptoms that detect infected people. In addition, we choose the tested individuals with the aim of obtaining the ground truth sample set that allows us to evaluate the performance of the different methods quantitatively. Since questionnaires contain categorical data, we apply binary encoding (dummy coding) to each response. This leads to datasets with 201 features (attributes, columns, or variables) for 2020, and datasets with between 431 and 452 columns for 2021, depending on the selected country. For each dataset, this study evaluates the performance of the various COVID-19 active case detection methods.
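The binary (dummy) encoding step described above can be sketched with pandas; the column names and values here are illustrative stand-ins, not the actual UMD-CTIS microdata schema:

```python
import pandas as pd

# Toy stand-in for questionnaire rows: one categorical answer per column.
# Column names and values are hypothetical, not the real survey schema.
df = pd.DataFrame({
    "gender": ["female", "male", "female"],
    "age_group": ["18-24", "55-64", "35-44"],
    "fever": ["yes", "no", "yes"],
})

# Binary (dummy) encoding: one 0/1 column per category level.
encoded = pd.get_dummies(df, dtype=int)
print(sorted(encoded.columns))
```

Applied to the real questionnaires, this kind of expansion is what yields the 201 columns for 2020 and the 431 to 452 columns for 2021 mentioned above.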
To this end, our study divided every dataset into 100 partitions. For each trial, 80% of the dataset rows (questionnaires or samples) were randomly selected as training samples, and the remaining 20% were used to test the various methods.

2.3 Detection methods under comparison
In this work, we compare the performance of various COVID-19 diagnosis methods belonging to three categories:
(1) Rule-based methods: CDC [1], WHO [18], Akinbami [2], Salomon [14], Perez [11].
(2) Logistic regression techniques: Menni [9], Roland [12], Smith [16], Shoer [15], Bhattacharya [4], Mika [10].
(3) Tree-based machine-learning models: Zoabi [19], Astley [3].
In this work, we have implemented two versions of the Menni method and two versions of the Zoabi method. Note that UMD-CTIS data did not register whether the respondent skipped meals. Therefore, we modified the Menni method by fixing the skipped-meals variable to zero (Menni_1). Furthermore, we followed the procedure reported in [9] to build the logistic regression model from individual features available in our dataset (Menni_2). In other words, we built a regression model that considers the features: age, gender, loss of smell and taste, cough, and fatigue. In the case of the Zoabi method, notice that the UMD-CTIS age ranges do not have a boundary at 60; the boundary is either at 55 or 65. We have created two different models, one for ages greater than 55 years (Zoabi_55) and the other for ages greater than 65 years (Zoabi_65). Further information regarding the methods under test can be found in the corresponding references and in the full version of the article [13].

2.4 Benchmarking detection methods
First, we use the F1-score to quantitatively assess the performance of the various detection methods. To this end, our procedure firstly obtains the predictions over the test set for each trial.
From the predicted estimates and the ground truth data, the procedure identifies the number of true positives TP, false positives FP, true negatives TN, and false negatives FN. Then, the F1-score is obtained as follows:

    F1 = 2TP / (2TP + FP + FN).    (1)

Tables 2 and 3 display the ensemble average and the CI of the F1-score for the six countries, for 2020 and 2021, respectively. Specifically, each value in these tables is obtained by averaging 100 realizations of the corresponding experiment. Tables with the sensitivity, specificity, and precision values obtained are included in the full version of the article [13].

3 RESULTS
As can be seen in Table 1, 83,238 respondents from Brazil reported a test outcome and at least one symptom in 2020. In this cohort, 44,963 participants reported a positive test result, and 38,275 respondents had a negative test outcome. Table 1 also includes the test positive rate (TPR), where TPR = (100 × positive) / (tested symptomatic). For example, the TPR for Brazil 2020 is 54.02%. On the other hand, for Brazil 2021, the dataset was extracted from 262,683 participants who reported at least one symptom and the outcome of a test done in the last 14 days. In this case, 106,471 respondents reported a positive test result, and 156,212 questionnaires informed a negative test outcome, with a TPR of 40.53%. In summary, the number of tested symptomatic, the number of positive cases, and the number of negative results for the remaining countries in 2020 and 2021 are displayed in Table 1. Additionally, Table 1 shows information about other individual features such as gender and age groups. Table 2 shows the ensemble averages with the corresponding 95% confidence intervals (CI) of the F1-score yielded by the various detection methods for the different countries and for 2020.
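The per-trial evaluation of Section 2.4, i.e., computing Eq. (1) on each of the 100 random splits and reporting the ensemble average with a 95% confidence interval, can be sketched as follows (synthetic labels and a noisy toy detector, not the survey data):

```python
import random

def f1_score(y_true, y_pred):
    """F1 = 2TP / (2TP + FP + FN), as in Eq. (1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

random.seed(0)
scores = []
for _ in range(100):  # 100 random trials, mirroring the study design
    y_true = [random.randint(0, 1) for _ in range(200)]
    # Toy detector that agrees with the truth 80% of the time.
    y_pred = [t if random.random() < 0.8 else 1 - t for t in y_true]
    scores.append(f1_score(y_true, y_pred))

mean = sum(scores) / len(scores)
# Normal-approximation 95% confidence interval of the ensemble average.
sd = (sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)) ** 0.5
half = 1.96 * sd / len(scores) ** 0.5
print(f"F1 = {mean:.4f} (95% CI {mean - half:.4f} - {mean + half:.4f})")
```

Each entry of Tables 2 and 3 is an average of this kind over 100 realizations, with the corresponding interval in parentheses.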
In particular, the methods with the best F1-scores for each country are: Brazil (Astley: 73.72%), Canada (Menni_1: 54.33%), Israel (Bhattacharya: 62.78%), Japan (Menni_1: 46.33%), Turkey (Bhattacharya: 67.67%), and South Africa (Roland: 67.32%). The F1-scores in % and the CIs obtained for 2021 are displayed in Table 3. For 2021, the best F1-scores are: Brazil (Menni_2: 66.54%), Canada (Smith: 50.28%), Israel (Bhattacharya: 58.76%), Japan (Mika: 52.41%), Turkey (Bhattacharya: 64.61%), and South Africa (Menni_2: 66.50%). As observed in Tables 2 and 3, none of the methods achieved an F1-score of 74% or above, indicating that no model is very good. According to Table 1, Brazil, Turkey, and South Africa exhibit TPR values at least twofold higher than those obtained from Canada, Israel, and Japan.

Consistent Comparison of Symptom-based Methods for COVID-19 Infection Detection (Extended Abstract)

Table 1: Characteristics of the study population for the various countries and for two non-overlapped periods (2020 and 2021). Each cell lists the 2020 / 2021 values.
Characteristic             Brazil          Canada        Israel        Japan         Turkey        South Africa
1. Tested symptomatic, N   83238 / 262683  8927 / 33997  5944 / 19063  4698 / 41010  15952 / 28896  7883 / 23038
2. Test outcome
(a) Positive, N            44963 / 106471  838 / 3433    1238 / 2869   532 / 4011    6167 / 9228    2866 / 8459
(b) Negative, N            38275 / 156212  8089 / 30564  4706 / 16194  4166 / 36999  9785 / 19668   5017 / 14579
(c) TPR, %                 54.02 / 40.53   9.39 / 10.10  20.83 / 15.05 11.32 / 9.78  38.66 / 31.94  36.35 / 36.71
3. Gender
(a) Female, N              45357 / 130235  5438 / 19472  2941 / 9290   1679 / 14283  3939 / 7185    3923 / 11291
(b) Male, N                24928 / 76689   2315 / 9824   2199 / 6746   2388 / 20791  8920 / 15292   2525 / 6730
4. Age groups
(a) 18-24, N               8270 / 27474    1136 / 3248   583 / 1498    179 / 871     1716 / 2267    739 / 1580
(b) 25-34, N               19596 / 56227   2337 / 7172   1144 / 3069   577 / 3797    4375 / 5756    2252 / 4889
(c) 35-44, N               21061 / 57452   1750 / 6688   1041 / 3333   997 / 7527    4043 / 7110    1801 / 4721
(d) 45-54, N               13776 / 39122   1210 / 5215   933 / 3115    1216 / 10413  2071 / 4594    1141 / 3878
(e) 55-64, N               6968 / 22190    954 / 4478    880 / 2634    828 / 8724    862 / 2400     491 / 2124
(f) 65-74, N               140 / 6016      308 / 2421    510 / 1957    479 / 3529    158 / 719      1667 / 799
(g) 75+, N                 233 / 1025      126 / 825     143 / 627     66 / 846      21 / 134       27 / 230

Table 2: F1-score and its 95% confidence interval for the selected countries for 2020, in %.
Method        Brazil                 Canada                 Israel                 Japan                  Turkey                 South Africa
Menni_1       65.56 (65.48 - 65.64)  54.33 (53.66 - 54.99)  59.76 (59.16 - 60.36)  46.33 (45.33 - 47.33)  63.93 (63.68 - 64.17)  61.39 (61.07 - 61.70)
Menni_2       71.13 (71.01 - 71.24)  49.33 (48.77 - 49.88)  57.50 (57.04 - 57.97)  39.91 (39.27 - 40.54)  67.41 (67.21 - 67.60)  66.36 (66.10 - 66.62)
Roland        69.38 (69.30 - 69.46)  51.44 (50.86 - 52.02)  61.93 (61.46 - 62.41)  40.68 (39.98 - 41.39)  67.06 (66.87 - 67.26)  67.32 (67.05 - 67.58)
Smith         71.11 (71.05 - 71.18)  53.43 (52.85 - 54.01)  62.47 (61.98 - 62.97)  45.12 (44.42 - 45.82)  67.30 (67.11 - 67.49)  62.06 (61.80 - 62.32)
Zoabi_55      70.71 (70.65 - 70.77)  32.96 (32.37 - 33.54)  47.76 (47.32 - 48.20)  29.95 (29.29 - 30.60)  57.86 (57.69 - 58.03)  59.05 (58.80 - 59.31)
Zoabi_65      70.73 (70.67 - 70.79)  32.86 (32.28 - 33.44)  47.79 (47.36 - 48.23)  29.91 (29.27 - 30.55)  57.72 (57.55 - 57.88)  59.00 (58.74 - 59.25)
CDC           73.42 (73.36 - 73.48)  23.43 (23.14 - 23.72)  45.84 (45.46 - 46.21)  27.38 (27.00 - 27.75)  62.60 (62.42 - 62.78)  62.13 (61.88 - 62.39)
Shoer         70.45 (70.39 - 70.52)  50.95 (50.37 - 51.54)  62.41 (61.93 - 62.89)  44.57 (43.86 - 45.28)  67.49 (67.30 - 67.69)  66.76 (66.52 - 67.00)
Bhattacharya  69.77 (69.70 - 69.83)  51.90 (51.31 - 52.50)  62.78 (62.30 - 63.26)  39.41 (38.84 - 39.97)  67.67 (67.48 - 67.87)  66.81 (66.52 - 67.10)
WHO           23.92 (23.83 - 24.01)  24.08 (23.45 - 24.70)  24.69 (24.15 - 25.24)  27.29 (26.52 - 28.06)  25.14 (24.90 - 25.38)  30.97 (30.59 - 31.35)
Perez         59.47 (59.39 - 59.55)  45.20 (44.56 - 45.83)  52.27 (51.71 - 52.82)  32.93 (32.23 - 33.64)  58.12 (57.89 - 58.35)  61.00 (60.70 - 61.30)
Mika          69.43 (69.37 - 69.49)  51.43 (50.86 - 52.01)  62.16 (61.68 - 62.63)  45.29 (44.65 - 45.94)  67.08 (66.89 - 67.28)  66.40 (66.13 - 66.68)
Akinbami_1    12.85 (12.77 - 12.94)  11.33 (10.72 - 11.93)  10.22 (9.82 - 10.62)   13.38 (12.58 - 14.18)  11.48 (11.26 - 11.70)  17.70 (17.34 - 18.07)
Akinbami_2    14.69 (14.60 - 14.78)  9.41 (8.89 - 9.92)     9.59 (9.16 - 10.01)    13.16 (12.35 - 13.98)  10.81 (10.60 - 11.03)  17.14 (16.80 - 17.49)
Akinbami_3    27.84 (27.73 - 27.94)  20.23 (19.66 - 20.81)  21.67 (21.14 - 22.19)  18.98 (18.22 - 19.73)  26.31 (26.05 - 26.56)  28.93 (28.57 - 29.29)
Salomon       30.97 (30.87 - 31.07)  25.52 (24.84 - 26.20)  27.12 (26.58 - 27.66)  30.64 (29.93 - 31.35)  28.36 (28.10 - 28.61)  39.35 (38.98 - 39.72)
Astley        73.72 (73.65 - 73.78)  48.29 (47.58 - 49.00)  62.47 (61.98 - 62.97)  44.13 (43.32 - 44.93)  67.45 (67.24 - 67.65)  66.85 (66.61 - 67.09)

Since the F1-score is highly affected by imbalanced classes [7], we computed the averages of the F1-score yielded by the detection methods for three groups: the broad set of the six countries, and the sets of countries with high TPR (Brazil, Turkey, and South Africa) and low TPR (Canada, Israel, and Japan), for 2020, 2021, and the entire interval 2020-2021 (Table 4). For 2020, when there was no vaccination yet, the most efficient method was Astley (Average: 60.49%). In the Astley method, the most relevant features are cough, stuffy or runny nose, aches or muscle pain, headache, sore throat, and fever. In 2021, when vaccination began, Mika was the most effective method (Average: 58.35%). In the Mika method, fever, cough, loss of taste and smell, and gastrointestinal problems are considered for COVID-19 detection.
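The three-group averaging behind Table 4 is a plain mean of the per-country scores; a small check using the 2020 Astley row from Table 2 (values copied from the table, grouping as defined in the text):

```python
# Per-country F1 (%) for the Astley method in 2020, taken from Table 2.
f1_2020 = {"Brazil": 73.72, "Canada": 48.29, "Israel": 62.47,
           "Japan": 44.13, "Turkey": 67.45, "South Africa": 66.85}

HIGH_TPR = ["Brazil", "Turkey", "South Africa"]
LOW_TPR = ["Canada", "Israel", "Japan"]

def group_mean(scores, countries):
    """Average of the per-country F1 values over a country group."""
    return round(sum(scores[c] for c in countries) / len(countries), 2)

print(group_mean(f1_2020, HIGH_TPR))        # 69.34, the Table 4 High-TPR entry
print(group_mean(f1_2020, LOW_TPR))         # 51.63, the Table 4 Low-TPR entry
print(group_mean(f1_2020, list(f1_2020)))   # compare with the Table 4 Overall entry (60.49, up to rounding)
```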
In the full article [13], we compared the various detection methods in terms of sensitivity, specificity, and precision.

4 CONCLUSIONS
In this work, we conduct a comparison of various COVID-19 diagnosis methods based on survey information, using datasets extracted from the global UMD-CTIS survey. More precisely, we compare the different methods for six countries and two periods (with and without vaccines), using the F1-score as a performance metric. From these results, we highlight the techniques showing the best F1-score. It is important to mention that, as can be seen in Tables 2 and 3, none of the methods achieves an F1-score above 75%, indicating that no model has a superior performance.
Additional results and a more extended discussion can be found in the full version of the article [13].

5 ETHICAL DECLARATION
The Ethics Board (IRB) of IMDEA Networks Institute gave ethical approval for this work on 2021/07/05. IMDEA Networks has signed Data Use Agreements with Facebook and the University of Maryland (UMD) to access their data, specifically, UMD project 1587016-3 entitled C-SPEC: Symptom Survey: COVID-19, entitled ILI Community-Surveillance Study. The data used in this study was collected by the University of Maryland through The University of Maryland Social Data Science Center Global COVID-19 Trends and Impact Survey, in partnership with Facebook. Informed consent has been obtained from all participants in this survey by this institution. All the methods in this study have been carried out in accordance with relevant ethics and privacy guidelines and regulations.

6 AVAILABILITY OF DATA AND MATERIALS
The data presented in this paper (in aggregated form) and the programs used to process it will be openly accessible at https://github.com/GCGImdea/coronasurveys/.
The microdata of the CTIS survey from which the aggregated data was obtained cannot be shared, as per the Data Use Agreements signed with Facebook and the University of Maryland (UMD).

7 FUNDING/SUPPORT
This work was partially supported by grants COMODIN-CM and PredCov-CM, funded by Comunidad de Madrid and the European Union through the European Regional Development Fund (ERDF), and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by Ministry of Science and Innovation - State Research Agency, Spain MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR.

Table 3: F1-score and its 95% confidence interval for the selected countries for 2021, in %.
Method        Brazil                 Canada                 Israel                 Japan                  Turkey                 South Africa
Menni_1       59.24 (59.18 - 59.31)  49.38 (49.02 - 49.74)  57.31 (56.96 - 57.65)  49.24 (49.16 - 49.83)  59.65 (59.44 - 59.87)  58.28 (58.06 - 58.50)
Menni_2       66.54 (66.49 - 66.59)  39.82 (39.59 - 40.05)  53.46 (53.21 - 53.70)  42.60 (42.37 - 42.84)  62.71 (62.56 - 62.85)  66.50 (66.33 - 66.68)
Roland        65.76 (65.71 - 65.82)  46.28 (46.03 - 46.53)  57.16 (56.86 - 57.46)  42.82 (42.62 - 43.03)  64.13 (63.96 - 64.31)  64.41 (64.23 - 64.59)
Smith         63.37 (63.32 - 63.42)  50.28 (49.99 - 50.57)  58.00 (57.68 - 58.33)  51.48 (51.23 - 51.74)  64.38 (64.21 - 64.55)  61.62 (61.45 - 61.80)
Zoabi_55      59.83 (59.79 - 59.88)  37.31 (37.01 - 37.60)  39.63 (39.28 - 39.98)  33.71 (33.45 - 33.98)  52.14 (51.88 - 52.40)  59.62 (59.47 - 59.77)
Zoabi_65      59.78 (59.74 - 59.83)  37.10 (36.81 - 37.39)  39.64 (39.29 - 39.99)  33.36 (33.11 - 33.62)  52.06 (51.80 - 52.31)  59.54 (59.38 - 59.69)
CDC           63.22 (63.17 - 63.26)  27.41 (27.28 - 27.55)  38.78 (38.59 - 38.97)  28.54 (28.40 - 28.68)  55.96 (55.81 - 56.11)  61.25 (61.10 - 61.39)
Shoer         65.81 (65.76 - 65.87)  41.10 (40.84 - 41.36)  53.67 (53.37 - 53.97)  45.42 (45.07 - 45.78)  64.18 (64.01 - 64.35)  64.97 (64.80 - 65.15)
Bhattacharya  64.16 (64.11 - 64.22)  49.22 (48.96 - 49.49)  58.76 (58.48 - 59.03)  45.82 (45.59 - 46.05)  64.61 (64.44 - 64.78)  63.40 (63.22 - 63.59)
WHO           23.62 (23.56 - 23.68)  26.01 (25.66 - 26.35)  27.92 (27.59 - 28.24)  34.05 (33.74 - 34.37)  27.72 (27.49 - 27.94)  32.78 (32.58 - 32.98)
Perez         54.85 (54.79 - 54.90)  44.70 (44.40 - 45.00)  51.27 (50.93 - 51.61)  39.72 (39.45 - 40.00)  56.03 (55.86 - 56.21)  59.17 (58.98 - 59.35)
Mika          65.33 (65.28 - 65.38)  46.76 (46.40 - 47.12)  57.50 (57.22 - 57.79)  52.41 (51.73 - 53.09)  64.13 (63.96 - 64.31)  63.98 (63.81 - 64.15)
Akinbami_1    12.02 (11.96 - 12.07)  11.43 (11.17 - 11.70)  10.60 (10.33 - 10.88)  11.11 (10.82 - 11.39)  13.86 (13.69 - 14.03)  15.86 (15.66 - 16.06)
Akinbami_2    12.02 (12.05 - 12.16)  8.03 (7.79 - 8.27)     11.48 (11.20 - 11.75)  9.10 (8.83 - 9.31)     11.80 (11.64 - 11.96)  13.61 (13.44 - 13.79)
Akinbami_3    26.59 (26.00 - 26.11)  20.96 (20.64 - 21.27)  21.96 (21.62 - 22.30)  19.90 (19.63 - 20.17)  26.35 (26.12 - 26.58)  28.08 (27.85 - 28.31)
Salomon       30.15 (30.11 - 30.24)  28.06 (27.70 - 28.43)  30.72 (30.39 - 31.05)  37.27 (36.97 - 37.57)  31.31 (31.09 - 31.53)  38.03 (37.83 - 38.23)
Astley        65.95 (65.90 - 66.01)  45.07 (44.74 - 45.40)  58.62 (58.29 - 58.94)  50.39 (50.08 - 50.70)  63.67 (63.50 - 63.85)  64.06 (63.88 - 64.24)

Table 4: Average F1-score (in %) for three country groups: the overall six countries (Overall), the countries with high TPR (High TPR: Brazil, Turkey, and South Africa), and the countries with low TPR (Low TPR: Canada, Israel, and Japan), for 2020, 2021, and 2020-2021.
              2020                        2021                        2020-2021
Method        Overall  Low TPR  High TPR  Overall  Low TPR  High TPR  Overall  Low TPR  High TPR
Menni_1       58.55    53.47    63.63     55.52    51.98    59.06     57.03    52.73    61.34
Menni_2       58.61    48.91    68.30     55.27    45.29    65.25     56.94    47.10    66.78
Roland        59.64    51.35    67.92     56.76    48.75    64.77     58.20    50.05    66.34
Smith         60.25    53.67    66.82     58.19    53.25    63.12     59.22    53.46    64.97
Zoabi_55      49.72    36.89    62.54     47.04    36.88    57.20     48.38    36.89    59.87
Zoabi_65      49.67    36.85    62.48     46.91    36.70    57.13     48.29    36.78    59.81
CDC           49.13    32.22    66.05     45.86    31.58    60.14     47.50    31.90    63.10
Shoer         60.44    52.64    68.23     55.86    46.73    64.99     58.15    49.69    66.61
Bhattacharya  59.72    51.36    68.08     57.66    51.27    64.06     58.69    51.32    66.07
WHO           26.02    25.35    26.68     28.68    29.33    28.04     27.35    27.34    27.36
Perez         51.50    43.47    59.53     50.96    45.23    56.68     51.23    44.35    58.11
Mika          60.30    52.96    67.64     58.35    52.22    64.48     59.33    52.59    66.06
Akinbami_1    12.83    11.64    14.01     12.48    11.05    13.91     12.65    11.35    13.96
Akinbami_2    12.47    10.72    14.21     11.02    9.54     12.51     11.75    10.13    13.36
Akinbami_3    23.99    20.29    27.69     23.97    20.94    27.01     23.98    20.62    27.35
Salomon       30.33    27.76    32.89     32.59    32.02    33.16     31.46    29.89    33.03
Astley        60.49    51.63    69.34     57.96    51.36    64.56     59.22    51.50    66.95
tpCKdIce6f
Consistent Comparison of Symptom-based Methods for COVID-19 Infection Detection - Review
4: Good paper, accept
In this work, the authors perform a consistent comparison of the different COVID-19 active case detection methods from the dataset constructed from the UMD-CTIS survey. The authors primarily implemented 3 broad types of detections, namely rule-based methods, logistic regression techniques and tree-based machine learning methods. F1-score is used as the evaluation metric and the experiments were performed on the data from Brazil, Canada, Israel, Japan, Turkey and South Africa for the years 2020 and 2021. Some of the comments for this work are as follows:
+ The work divides the data only in terms of years (2020 & 2021). However, a more important experiment would be to evaluate the models on the following scenarios: 1. Beginning of COVID where information about the disease was not well known vs the time when we got a lot of info about COVID. 2. Vaccines available vs not available. 3. Different COVID variants (alpha, beta, etc). 4. Model performance based on different mobility restrictions taking place (lockdowns, restricted international travel, etc).
+ AUC-ROC can also be considered to be an important evaluation metric to compare the performance of the different models.
3: The reviewer is fairly confident that the evaluation is correct
u9zVZTg_Ky
KDD.org/2023/Workshop/epiDAMIK
2023
Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics
["Xiao Ning", "Yongyue Wei", "Feng Chen"]
Modelling and predicting the behaviour of infectious diseases is essential for early warning and evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This paper proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate time-varying parameters from given data for the compartmental model. Our proposed PINNs approach captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. The experimental findings on synthesized data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs approach can be successfully extended to other regions and infectious diseases.
["Compartmental models", "COVID-19 transmission", "Physics-informed neural networks", "Forward-inverse problem"]
ABSTRACT
Modelling and predicting the behaviour of infectious diseases is essential for early warning and evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This paper proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate time-varying parameters from given data for the compartmental model. Our proposed PINNs approach captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. The experimental findings on synthesized data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs approach can be successfully extended to other regions and infectious diseases.

KEYWORDS
Compartmental models, COVID-19 transmission, Physics-informed neural networks, Forward-inverse problem

1 INTRODUCTION
The emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has presented an unprecedented and complex public health challenge, with emerging and re-emerging infectious diseases posing a significant threat. Compartmental models, governed by a nonlinear system of ordinary differential equations (ODEs), simulate multi-state population transitions by incorporating domain knowledge and mathematical assumptions to characterize the transmission dynamics of infectious diseases. These models are a powerful tool for detecting, understanding, and combating infectious disease outbreaks and have been widely used to evaluate the impact of various public health interventions during the COVID-19 pandemic [24]. However, since real-world data can be inherently stochastic, noisy, and even inaccessible, model optimization and methodological innovation are urgently needed to handle imperfect data and provide early warning of major public health emergencies.
Modeling and predicting the behavior of infectious diseases is crucial for early warning and evaluating effective interventions to mitigate damage. The first compartmental model, Susceptible-Infectious-Removed (SIR), was proposed by Kermack and McKendrick to study the epidemics of the Black Death in London and the plague in Mumbai [12].
Compartmental models allow the addition of compartments or transmission parameters to explore and estimate the impact of different assumptions regarding interventions. These parameters, included in the compartmental model, determine the transmission progress between different disease statuses and can generate essential characteristics of an epidemic [2]. Finding the best-fit parameters from the system, given available data, is an inverse problem. Several numerical methods have been developed to infer constant model parameters from available data. These methods convert the inverse problem into an optimization problem and formulate an estimator by minimizing an objective function. However, since various non-pharmaceutical interventions (NPIs) are employed during the evolution of COVID-19, some model parameters are time-varying.
Identifying time-varying parameters in compartmental models is a complex inverse problem, making it challenging to accurately model outbreak dynamics [1, 10]. Recent advances in physics-informed machine learning have shown promise in COVID-19 transmission modelling by incorporating prior knowledge into deep neural networks to enhance their accuracy and robustness [11]. For example, Kharazmi et al. used PINNs to identify time-dependent parameters and data-driven fractional differential operators in several epidemiological models [13]. Long et al. proposed a variant of PINNs to fit daily reported cases and identify time-varying parameters in the susceptible-infectious-recovered-deceased model for the spread of COVID-19 [15]. Nascimento et al. introduced an approach that combines physics-informed and data-driven kernels to reduce the gap between predictions and observations [17]. Cai et al. employed fractional physics-informed neural networks to refine the classical susceptible-exposed-infected-removed (SEIR) model, infer time-dependent parameters, and identify unobserved dynamics of the fractional SEIR model [3]. However, most of these approaches only consider the transmission rate as a function of time, while setting other parameters to fixed values. Additionally, they mainly use time-varying parameters for prediction and lack a systematic epidemiological analysis.
The primary focus of this paper is to introduce a novel method for evaluating time-varying parameters in ODEs-based compartmental models and to assess the impact of the NPIs based on the estimated parameters. We constructed a SEIRD compartmental model that takes an incubation period and the corresponding infectivity into account, including both unknown time-varying and constant parameters. Given many unknown parameters and limited data, we modeled the system of ODEs as one network and the time-varying parameters as another network to reduce the parameters of the neural networks. Furthermore, such a structure of the PINNs approach is in line with the prior epidemiological correlations. We then tested the effectiveness of our methodology using real-world reported data; simulation experiments showed that our proposed PINNs method effectively performs data-driven parameter estimation for modelling COVID-19 transmission. Moreover, as more data becomes available, it can be successfully extended to model and analyze infectious disease transmission dynamics in various regions and for different infectious diseases.

2 METHODOLOGY
2.1 Compartmental model
Compartmental models enable the simulation of multi-state population transitions by incorporating domain knowledge and mathematical assumptions to characterize the dynamics of infectious diseases.
These models are generally represented as the following nonlinear dynamical system:

    dU(t)/dt = F(t, U(t); Ξ),  U(t0) = U0,    (1)

where U(t) ∈ R^D (typically D ≫ 1) is the state variable, t ∈ [t0, T] is the time range, U(t0) is the initial state, and Ξ stands for the parameters of the dynamical system.
The SIR compartmental model provided the simplest framework that matched the reporting structure with the least underlying assumptions. Many variations of the SIR model have been proposed to analyze the transmission of COVID-19. In this paper, we consider a geographical region as isolated from other regions, and within such region we divide the population (N) of the study region into five compartments: susceptible (S, vulnerable to COVID-19 infection), exposed (E, latent individual or asymptomatic infective), infected (I, symptomatic infected), recovered (R, immune to COVID-19), and dead (D, death due to COVID-19). The details of the SEIRD model are described below:

    dS(t)/dt = -β S(t) (ε E(t) + I(t)) / N
    dE(t)/dt = β S(t) (ε E(t) + I(t)) / N - E(t)/α
    dI(t)/dt = E(t)/α - γ I(t) - μ I(t)
    dR(t)/dt = γ I(t)
    dD(t)/dt = μ I(t)
    N = S(t) + E(t) + I(t) + R(t) + D(t)    (2)

where S(t), E(t), I(t), R(t), D(t) denote the number of susceptible, exposed, infectious, recovered, and deceased individuals over time, respectively, along with non-negative initial conditions S(0) = S0, E(0) = E0, I(0) = I0, R(0) = R0, D(0) = D0. β ≥ 0 represents the transmission rate, i.e., the probability of infection per exposure when a susceptible individual (S) has contact with an infected patient (I) and becomes a latent exposed individual (E). A coefficient parameter ε is introduced since the transmission capacity of exposed and onset populations may be different.
εβrepresents the potential rate per exposure when a susceptible indi-vidual (S) has mutual contact with an exposed individual ( E), andtransmits it to another exposed individual ( E).αis the averageduration of incubation period, 1/αis the rate of latent individualsbecoming infectious Besides, γ≥0represents the recovery rate,μ≥0represents the death rate, and Nis the total population.The assumption that the parameters in Eqs. 2 are time-constant,which is a highly restrictive and unrealistic one for the real-worldepidemic where various interventions exist. The associated inter-ventions implemented by authorities, and/or mutations of the virus,et al. make the compartmental model require time-varying parame-ters to capture the dynamic of dynamics of COVID-19. Therefore,by considering transmission rate β, recovery rate γand death rateμas functions of time β(t),γ(t),μ(t), the re-written SEIRD modelis as follows: dS(t)dt=−β(t)S(t)(εE(t))+I(t))NdE(t)dt=β(t)S(t)(εE(t))+I(t))N−E(t)αdI(t)dt=E(t)α−γ(t)I(t)−μ(t)I(t)dR(t)dt=γ(t)I(t)dD(t)dt=μ(t)I(t)N=S(t)+E(t)+I(t)+R(t)+D(t)(3)Among them, the five variables S(t),E(t),I(t),R(t),D(t)havethe same meanings as in Eq. 2. If we assume that the total populationNis constant, then the sum of the increase or decrease of the stateof each population is 0, namely,dS(t)dt+dI(t)dt+dR(t)dt+dD(t)dt=0.Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics Conference acronym ’XX, June 03–05, 2023, Woodstock, NYThe basic reproduction number R0is a constant epidemiologicalparameter that provides an estimation of the contagiousness of theinfectious disease. It also serves as a threshold parameter, whenR0>1, one infected individual can trigger an outbreak, while whenR0<1, the infection will not spread in the population. Given acompartmental model, R0can be calculated by the Next GenerationMatrix (NGM) approach [7].If the related parameters in the compartmental model are time-varying as in Eq. 
3, the reproduction number R₀ is expected to keep changing as a function of time, called the effective reproduction number R_t. Deriving R_t for the SEIRD model using the NGM approach yields the following expression:

R_t = ε β(t) α + β(t) / (γ(t) + μ(t))   (4)

R_t provides an estimation of the contagiousness of the infectious disease during the course of an outbreak, where not every individual is considered susceptible.

2.2 Deep neural networks
Deep neural networks (DNNs) have emerged as a reliable and effective method for nonlinear function approximation, demonstrating remarkable capabilities in scientific computation and engineering applications, as evidenced by their widespread utilization. Many types of DNNs have been developed, such as recurrent neural networks, convolutional neural networks, and Transformers [16]; here we only consider fully-connected deep neural networks (FDNNs). Neural networks can be viewed as discretizations of continuous dynamical systems, making them well-suited for dealing with dynamic systems. Mathematically, an FDNN defines a mapping of the form

F : x ∈ R^d ⟹ y = F(x) ∈ R^c,   (5)

where d and c are the input and output dimensions, respectively. Generally, a standard neural unit of an FDNN receives an input x ∈ R^d and produces an output y ∈ R^m, y = σ(Wx + b), with W ∈ R^{m×d} and b ∈ R^m being the weight matrix and bias vector, respectively. σ(·), referred to as the activation function, adds element-wise non-linearity to the model. An FDNN with l hidden layers can be considered a nested composition of sequential standard neural units. For convenience, we denote the output of the DNN by y(x; θ), with θ standing for the set of all weights and biases.
Specifically, the jth neuron in layer l can be formulated as

y_j^[l] = Σ_{k=1}^{n^[l−1]} w_{jk}^[l] σ^[l−1](y_k^[l−1]) + b_j^[l],   (6)

where y_k^[l−1] represents the value of the kth neuron in layer l−1, n^[l−1] represents the number of neurons in layer l−1, σ^[l−1] is the activation function of layer l−1, w_{jk}^[l] is the weight between the kth neuron in layer l−1 and the jth neuron in layer l, and b_j^[l] is the bias of the jth neuron in layer l.

The nonlinear activation function enhances the ability of a DNN to model various non-linear problems, and selecting a suitable activation function matters greatly for DNNs in all application domains. In particular, the activation function has an extremely significant impact on the success of training PINNs.

[Figure 1: Illustration of the FDNN. A neural network consists of an input layer (the input x), several hidden layers (composed of weights W^l, bias b^l, and activation function σ), and an output layer.]

The ReLU activation function has been widely used in many deep learning applications because it deals well with the vanishing gradient problem [19]. However, for solving differential equations, the first and second derivatives of the neural network serve as inputs to the loss function, which means that the activation function of the DNN in the PINNs framework must have a non-zero second derivative. Indeed, many research works have demonstrated that the sigmoid and tanh functions are well suited for effective PINNs training.

2.3 PINNs for SEIRD model
Physics-informed neural networks (PINNs) are a data-driven approach to approximate the solution of differential equations and estimate unknown parameters.
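The layer rule in Eq. (6) can be written compactly as a loop of matrix-vector products. The sketch below uses NumPy, tanh activations, and an arbitrary 1 → 16 → 16 → 5 architecture (an assumed shape for mapping time t to the five SEIRD states, not the exact architecture reported in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def fdnn_forward(x, weights, biases):
    """Forward pass of an FDNN: each hidden layer applies Eq. (6),
    y^[l] = sigma(W^[l] y^[l-1] + b^[l]), here with tanh activation
    and a linear output layer."""
    y = x
    for W, b in zip(weights[:-1], biases[:-1]):
        y = np.tanh(W @ y + b)
    return weights[-1] @ y + biases[-1]

# Assumed illustrative architecture: time t in, five SEIRD states out.
sizes = [1, 16, 16, 5]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

out = fdnn_forward(np.array([0.5]), weights, biases)
assert out.shape == (5,)
```

Note the choice of tanh here matches the text's requirement that the activation have a non-zero second derivative, which ReLU would violate.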
The main idea of PINNs is to integrate a priori knowledge, physical laws or domain expertise modelled by differential equations, into deep neural networks. The equations in the compartmental model are coupled, and the coefficients are not independent of each other from a biological and epidemiological point of view. In this context, we employ two separate DNNs with input t to represent the states U(t) and the time-varying parameters, respectively. For the two unknown constant parameters (α, ε), we designed modified tanh activation functions to represent them. The tanh function is tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x}), with values in [−1, 1]. Considering that α > 0 and 0 ≤ ε ≤ 1, we represent ε as tanh(x) and α as 21·tanh(x), where x is a random sample drawn uniformly from the interval [0, 3]. Meanwhile, COVID-19 transmission involves the analysis of real-world data, for which the available data tend to be small and sparse. Such a PINNs architecture enables a well-trained model with a limited data set.

The PINNs framework is required to fit the data and simultaneously satisfy the equations, so the loss function includes two parts. The first part is the mismatch between the network output and the available data, and the other part is the residual of the ODEs. In this study, we employ the approximation U_NN(t; Θ_U) ≈ U(t) to represent the time-varying SEIRD equations (Eqs. 3). The parameters Θ are optimized to achieve the best fit with the observed data. Considering the available data U_j at times t₁, t₂, ..., t_n as training points (ground truth), the mean squared error (MSE) is calculated as follows:

MSE_u = (1/N) Σ_{j=1}^{N} ||Û_NN(t_j) − U(t_j)||²,   (7)

The other component of the loss function is the residual of the system of Eqs. 1; we define the residual as R_NN(t) = dU(t)/dt − F(U_NN, t; Ξ).
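The scaled-tanh trick for the constants can be sketched as below. In the actual model the raw inputs would be trainable quantities updated by the optimizer; here they are plain numbers for illustration, and the scale 21 follows the text's definition of α.

```python
import math

def constrained_constants(x_eps, x_alpha):
    """Represent the constant parameters via scaled tanh, as described in
    the text: for x > 0, eps = tanh(x) lies in (0, 1) and
    alpha = 21 * tanh(x) lies in (0, 21), so both constraints
    (0 <= eps <= 1, alpha > 0) hold by construction."""
    eps = math.tanh(x_eps)
    alpha = 21.0 * math.tanh(x_alpha)
    return eps, alpha

# x values are illustrative samples from the stated interval [0, 3].
eps, alpha = constrained_constants(2.5, 0.28)
assert 0.0 < eps < 1.0
assert 0.0 < alpha < 21.0
```

Reparameterizing in this way lets an unconstrained optimizer search over the raw inputs while the biological bounds on α and ε are never violated.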
The residual, denoted R(t; Θ_U), serves as a metric for assessing how accurately the approximation U_NN(t; Θ_U) satisfies the ordinary differential equations (ODEs). Evaluating the residual involves computing the time derivative of the neural network output, which can be accomplished using automatic differentiation [20]. Automatic differentiation is a computational technique that efficiently computes derivatives by applying the chain rule: it breaks functions down into elementary operations and calculates their derivatives, allowing accurate and efficient computation of the derivative of the overall function with respect to its input variables.

MSE_r = (1/N) Σ_{j=1}^{N} ||R_NN(t_j)||²,   (8)

In summary, the loss function of the proposed PINNs approach is defined as:

L = ω_u MSE_u + ω_r MSE_r   (9)

The weight coefficients ω_u, ω_r in the loss function play a crucial role in balancing the optimization process between learning from the data and satisfying the ODEs. These parameters allow fine-tuning of the model's behaviour and a trade-off between the two objectives. By adjusting the values of ω_u, ω_r, the emphasis can be placed on either accurately fitting the available data or ensuring the ODE constraints are well satisfied. Consequently, the PINNs model strives to minimize this loss function, effectively learning the underlying physics encoded in the ODEs while accurately capturing the patterns and relationships in the available data.

3 EXPERIMENTS
In this section, we describe the collected data and present the results obtained from parameter estimation and predictions using the proposed PINNs approach.

3.1 Data source
For the COVID-19 epidemic in Italy, the first official report of an indigenous case was on February 21, 2020 in Lodi province, while several epidemiologically-linked cases were traced back to February 20, 2020.
The data considered in our study were downloaded from the Italian Civil Protection (http://www.protezionecivile.gov.it/media-comunicazione/comunicati-stampa) and the Ministry of Health (http://www.salute.gov.it/portale/home.html). They comprise cumulative infected, recovered, and deceased cases for the period from February 20, 2020 (day 1), to June 30, 2020 (day 132) [8]. To avoid weekly fluctuations induced by the work-leisure shift and natural noise in the real-world data, a 7-day moving average was used to smooth the reported data by averaging the values of each day with those of the 7 days before. In order to control the transmission of COVID-19 in Italy, lockdown and many restriction measures were implemented from February 23, 2020, as shown in the timeline in Fig. 3.

[Figure 2: Schematic diagram of the PINNs framework for the SEIRD compartmental model with unknown (time-varying and constant) parameters. The green-shaded DNN represents the states U_NN(t) to fit the available data and infer the unobserved dynamics. The yellow-shaded DNN represents the time-varying parameters β(t), γ(t), μ(t). The two constant parameters (α, ε) are represented by the modified tanh activation function.]
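The 7-day smoothing step can be sketched as a trailing moving average. This is a plausible reading of the description ("averaging the values of each day with those of the 7 days before"); the exact window convention used by the authors is an assumption here.

```python
def trailing_moving_average(series, window=7):
    """Smooth a daily series with a trailing moving average: each day is
    replaced by the mean of the most recent `window` days (fewer at the
    start, where a full window is not yet available)."""
    smoothed = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        smoothed.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    return smoothed

# A strongly weekly-oscillating series is flattened by the 7-day window.
daily = [0, 7, 0, 7, 0, 7, 0, 7]
assert trailing_moving_average(daily)[-1] == 4.0
```

In practice the same operation is one line with pandas (`series.rolling(7, min_periods=1).mean()`), but the explicit loop makes the window convention visible.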
All events and interventions are available from official websites https://mn.gov/governor/covid-19/news/.

[Figure 3: Timeline of NPIs implemented in Italy to control COVID-19. DPCM: Decree of the Prime Minister. The timeline marks, between February and June 2020, the first official report case, the localized and then national lockdowns, the ban on gatherings and sports events, the closure of parks, public gardens, open-air recreational activity, and non-essential industrial and commercial activities, and the DPCM releases of restriction measures leading to the general reopening with social distancing.]

3.2 Experimental settings
We train the PINNs model on a personal laptop running the Windows 10 operating system, equipped with an Intel(R) Core(TM) i7-8550U CPU operating at 1.8 GHz. We implement the PINNs approach using Python and the PyTorch framework [21]. For the numerical experiment, we train the neural networks using the Adam optimizer with an initial learning rate of 2×10⁻³ and a decay rate of 95% every 2000 epochs. The entire training process takes about 10 minutes to run 50,000 epochs on all training data, and predictions can be made within seconds.

3.3 Results
3.3.1 Data fitting. In this subsection, we evaluate how well the estimated parameters fit the SEIRD compartmental model on the available data.
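The stated schedule (initial rate 2×10⁻³, multiplied by 0.95 every 2000 epochs) is a standard step decay; it is equivalent to PyTorch's `StepLR(optimizer, step_size=2000, gamma=0.95)`. The arithmetic can be verified directly:

```python
def learning_rate(epoch, lr0=2e-3, decay=0.95, step=2000):
    """Step-decay schedule described in the text: the learning rate is
    multiplied by `decay` once every `step` epochs."""
    return lr0 * decay ** (epoch // step)

assert learning_rate(0) == 2e-3
assert abs(learning_rate(4000) - 2e-3 * 0.95 ** 2) < 1e-12
```

Over the full 50,000-epoch run this yields 25 decay steps, i.e. a final rate of 2×10⁻³ · 0.95²⁵ ≈ 5.6×10⁻⁴.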
Fig. 4 shows the fit of the SEIRD model dynamics to the available real-world reported data (after 7-day smoothing), demonstrating that the proposed PINNs approach can accurately fit the different fluctuations in the data.

[Figure 4: Data fitting during training. (a) Fitting to the available data of currently infectious. (b) Fitting to the available data of cumulative recovered. (c) Fitting to the available data of cumulative deaths. Dot: observed data. Line: 7-day rolling average of observed data. Dashed: PINNs' prediction of the dynamics.]

3.3.2 Inference. We aim to infer the time-varying parameters β(t), γ(t), μ(t), as well as the constants α and ε, by solving the inverse problem for the SEIRD compartmental model. The incubation period and the infectiousness during this period are parameters specific to the virus, which can be obtained from clinical case information or inferred using statistical or mathematical modelling based on available data.
In our study, we estimate the incubation period of COVID-19 to be approximately 5.8 days, and the infectiousness during the incubation period is found to be nearly equal (99.9%) to that of the infection period.

The transmission dynamics of infectious diseases are influenced by multiple factors, such as government interventions, individual behaviour, and medical resources. In order to accurately model the spread of infectious diseases using compartmental models, it is necessary to update certain parameters over time to account for the evolving impact of interventions. These parameters include β(t), γ(t), and μ(t), which represent the time-varying rates of transmission, recovery, and mortality, respectively. In Figure 5, we present the inference results for these time-varying parameters in Italy from February 20 to June 30, 2020. This analysis provides insights into how the values of β(t), γ(t), and μ(t) change over the specified time period, reflecting the impact of interventions and other factors on the dynamics of the disease.

Note that the events that have an impact on β(t) have to do with people's adaptation to preventive interventions and the interactions among individuals, whereas μ(t) relates to the availability and effectiveness of health care, as well as to the resource availability in hospitals. γ(t) is known to be a disease-specific parameter (the inverse of the infectious period) but is also affected by the capacity of the healthcare system to accommodate hospitalization. As shown in Fig. 5(a), the transmission rate β(t) fits well with what would be expected given such events. The earliest traceable confirmed case of COVID-19 was on February 20, 2020; the authorities of Italy started imposing the localized lockdown for certain regions on February 23, 2020, and these control measures achieved a certain success, as demonstrated by a significant reduction in the transmission rate β(t). As for γ(t) and μ(t), hospital capacity, particularly that of emergency rooms, had a considerable impact.
In the context of COVID-19, hospitals were at full capacity in the first months of the outbreak, and as the months went by, healthcare professionals learned more about possible treatments for the disease's symptoms and effects. This usually results in a decrease in the proportion of individuals that die from the disease (a decrease of μ(t)) and a decrease in the recovery time (an increase of γ(t)). As shown in Fig. 5(b) and Fig. 5(c), in qualitative terms, there was an increasing trend in γ(t) and a decreasing trend in μ(t).

The effective reproduction number is a crucial parameter in the SEIRD model that helps to predict the spread of infectious diseases. R_t less than 1 indicates that the transmission of the infectious disease will gradually die out. By monitoring changes in R_t over time, public health officials can make informed decisions about interventions to control the spread of the disease. Fig. 6(a) shows the evolution of R_t = ε β(t) α + β(t)/(γ(t) + μ(t)) in the proposed SEIRD compartmental model from February 20 to June 30, 2020. In the first several days of the outbreak, the effective reproduction number R_t was greater than 8, which resulted in a substantial outbreak. From February 25, R_t gradually decreased with the localized lockdown of certain regions and growing awareness of the epidemic. However, R_t was still greater than 1, which may be due to the partially incomplete lockdown, or to the movement of people from northern to southern Italy when the country-wide lockdown was announced but not yet enforced. When the national lockdown was fully operational and strictly enforced, R_t kept decreasing and finally fell below 1. Moreover, R_t steadily declined at the end of March due to a wider testing campaign that identified more mildly symptomatic infected individuals. Since June 15, R_t has shown a growing trend after the DPCM declared the general opening in effect, with social distancing and other measures remaining.
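Given the inferred rates, Eq. (4) turns directly into a one-line computation of R_t. The rate values below are illustrative placeholders of the right order of magnitude, not the paper's estimates for any specific date.

```python
def effective_reproduction_number(beta, gamma, mu, alpha=5.8, eps=0.99):
    """R_t from Eq. (4): infections seeded during the exposed stage
    (eps * beta * alpha) plus those seeded during the infectious stage
    (beta / (gamma + mu)). alpha and eps default to the constants
    estimated in the paper."""
    return eps * beta * alpha + beta / (gamma + mu)

# Illustrative: a high early transmission rate gives R_t well above 1,
# and cutting beta (e.g. by lockdown) lowers R_t proportionally.
early = effective_reproduction_number(beta=0.5, gamma=0.05, mu=0.01)
late = effective_reproduction_number(beta=0.05, gamma=0.05, mu=0.01)
assert early > 8 > 1 > late * 0  # early R_t exceeds 8; both are positive
```

Because both terms scale linearly in β(t), interventions that reduce contacts shrink R_t by the same factor regardless of the recovery and death rates.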
Additionally, to validate the estimated R_t, a serial Bayesian model was implemented to produce the R_t of Italy over the same time period [5], as shown in Fig. 6(b). Parameters for the serial interval distribution in the model were set according to the published literature (mean = 7.5 d; SD = 3.4 d) [18, 23]. As shown in Fig. 6, the R_t estimated by the proposed PINNs approach is essentially the same as that estimated by the Bayesian model. Moreover, the proposed approach captures the dynamics in more detail and more accurately.

[Figure 5: The time-varying parameters of the SEIRD model inferred by the PINNs approach on Italy data from February 20 to June 30, 2020. (a): transmission rate β(t). (b): recovery rate γ(t). (c): death rate μ(t).]

3.3.3 Forecasting. Modeling results can provide reliable feedback for the authorities to make future decisions. The ODEs-based compartmental model requires determined initial conditions and model parameters to make predictions. To test the performance of the proposed PINNs approach, we performed predictions for the early outbreak of COVID-19 in Italy at one month, two months, and three months, respectively.
As the initial conditions can be obtained from the training data and the model parameters are already calibrated, we can forecast the epidemic dynamics by performing the forward process. In the prediction part, the values of β(t), γ(t), and μ(t) are assumed to be their final values from the training time window. Fig. 7 displays the one-week predictions and the corresponding observations for three time periods, produced by using the SEIRD model with the estimated parameters. Note that the recovered and death states in the SEIRD model are terminal states, which means that the numbers of recovered and dead people are always non-decreasing. In turn, the number of infected people may see periods of increase and decrease, as it is a transition state.

[Figure 6: R_t in Italy from February 24 to June 30, 2020. (a) R_t estimated by the proposed PINNs approach for the SEIRD model. (b) R_t estimated by the serial Bayesian model.]

Fig. 7(a) displays the one-week prediction based on the reported data from February 20 to March 20, 2020; Fig. 7(b) displays the one-week prediction based on the reported data from February 20 to April 19, 2020; and Fig. 7(c) displays the one-week prediction based on the reported data from February 20 to May 19, 2020. The perfect match between the predictions and the observations demonstrates that the parameters inferred by the learned network are very plausible, as well as the generalization ability of the model.

Furthermore, to quantitatively test the prediction performance of the proposed approach, we use three evaluation metrics to make fair and effective comparisons: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). The calculation methods are shown in Eqs.
(10)-(12):

MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|,   (10)

RMSE = √( (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)² ),   (11)

MAPE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i| / ŷ_i × 100%,   (12)

Interventions to control COVID-19 keep adjusting, which may introduce uncertainty; the experimental results presented in Table 1 show the highly accurate forecasting capability of the proposed approach.

Table 1: The forecasting performance at 3-day, 5-day, and 7-day horizons.

Metric   | After March 20, 2020    | After April 19, 2020    | After May 19, 2020
         | 3-day   5-day   7-day   | 3-day   5-day   7-day   | 3-day   5-day   7-day
MAE(I)   | 5411    5790    6419    | 2503    3258    2792    | 1352    2170    3046
RMSE(I)  | 5431    5819    6519    | 3705    2618    3275    | 1567    2515    3514
MAPE(I)  | 11.60%  11.52%  11.78%  | 2.32%   3.04%   2.61%   | 2.20%   3.70%   5.41%
MAE(R)   | 813     1728    2944    | 2934    5704    9001    | 1643    2700    4170
RMSE(R)  | 959     2128    3706    | 3321    6821    10936   | 1880    3151    4972
MAPE(R)  | 11.93%  20.07%  31.04%  | 5.57%   10.00%  14.83%  | 1.23%   1.96%   2.97%
MAE(D)   | 423     543     927     | 330     235     318     | 147     109     95
RMSE(D)  | 527     637     1151    | 349     279     379     | 147     122     109
MAPE(D)  | 8.36%   8.98%   12.64%  | 1.35%   0.95%   1.24%   | 0.45%   0.34%   0.30%
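Eqs. (10)-(12) translate directly into code. Note that Eq. (12) as written normalizes by the prediction ŷ_i rather than the observation y_i; the sketch below follows the paper's formula.

```python
import math

def mae(y_hat, y):
    """Mean absolute error, Eq. (10)."""
    return sum(abs(a - b) for a, b in zip(y_hat, y)) / len(y)

def rmse(y_hat, y):
    """Root mean square error, Eq. (11)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_hat, y)) / len(y))

def mape(y_hat, y):
    """Mean absolute percentage error, Eq. (12); the relative error is
    taken with respect to the prediction y_hat, as in the paper."""
    return 100.0 * sum(abs(a - b) / a for a, b in zip(y_hat, y)) / len(y)

pred, obs = [100.0, 200.0], [90.0, 220.0]
assert mae(pred, obs) == 15.0
assert abs(mape(pred, obs) - 10.0) < 1e-12
```

MAE and RMSE are scale-dependent (which is why the I, R, and D rows in Table 1 differ by orders of magnitude), while MAPE allows comparison across compartments of very different sizes.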
[Figure 7: 7-day forecasting results of the SEIRD model based on the estimated parameters. The first column plots the predicted current infections, the second column the predicted cumulative recovered, and the third column the predicted cumulative deaths; the dotted boxes represent the corresponding observations. (a) Forecasts based on the February 20 to March 20, 2020 time window. (b) Forecasts based on the February 20 to April 19, 2020 time window. (c) Forecasts based on the February 20 to May 19, 2020 time window.]

4 DISCUSSION
Transmission modelling is increasingly being used to support public health decision-making in the control of infectious diseases. In this paper, a modified SEIRD compartmental model with time-varying parameters is introduced to describe and predict the dynamics of COVID-19 transmission in Italy. Estimating the unknown parameters of this model is a complex inverse problem, for the solution of which we proposed a domain-specific PINNs approach. The proposed approach has been applied to modelling COVID-19 transmission in Italy; the estimated parameters proved effective in fitting the COVID-19 contagion data and in providing accurate predictions of its evolution. Beyond these results, the proposed PINNs approach allows us to gain a more detailed understanding of the contagion mechanism. Fig. 5(a) shows that the control measures imposed by the authorities seem to have been effective in reducing the key transmission rate parameter β(t). Fig. 5(b) and (c) show that the recovery rate tends to increase with time and the death rate to decrease.
This phenomenon, which seems not directly related to the lockdown, can be attributed to different causes, among which are a better understanding of the disease, a consequent improvement in the effectiveness of the response from the national health system, and possibly a change in the nature, virulence, and lethality of the virus. Furthermore, we evaluate how well the estimated parameters fit the SEIRD compartmental model by comparing our results with those of previous publications. We compare our results to those obtained using the rolling regression framework [4], where the order of magnitude of the time-varying parameters β(t), γ(t), and μ(t) is in agreement and the trend is almost identical. A comprehensive meta-analysis demonstrated that the median incubation period for general transmission in early outbreaks was 5.8 days [95% confidence interval (95% CI): 5.3, 6.2] [25]. Li et al. analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP; the results show that the mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0) [14]. Yang et al. collected contact-tracing data in a municipality in Hubei province during a full outbreak period to estimate the incubation period and serial interval of COVID-19; the estimated median incubation period of COVID-19 is 5.4 days (bootstrapped 95% confidence interval (CI) 4.8-6.0) [26]. The α estimated by the proposed PINNs approach is 5.8, which is consistent with the results of the above research. The ε estimated by the proposed PINNs approach is 0.99, which means that the transmission capacities of the exposed and onset populations are nearly identical [9].
Numerous related studies demonstrate that the incubation period and the infection period carry almost the same capacity for transmission [6, 22].

The goal of modeling the transmission dynamics of an infectious disease is to capture the mechanisms by which a host passes on the infection to other individuals. Once this information is clear, a model can be used as a sort of experimental system to simulate what would happen to the evolution of the disease under different interventions. While the proposed PINNs approach indeed offers many advantages, it does have some limitations. One of the main limitations is that the PINNs architecture requires prior knowledge of the physical laws and constraints that govern the problem being solved. The structure of compartmental models may change depending on the question of interest and impact their accuracy. This means that if the underlying epidemiological laws are not well understood, or if the available data are not consistent with the known epidemiological laws, the model may not work well. It should be noted, however, that the emphasis of infectious disease models is on their application to public health, not on the mathematics of the models. As the world-renowned statistician George E. P. Box put it: "All models are wrong, but some are useful."

5 CONCLUSIONS
In this paper, we proposed a novel PINNs approach to estimate the unknown parameters (including time-varying and constant parameters) of an ODEs-based compartmental model depicting the dynamics of COVID-19 transmission. The experimental results with real-world reported data reveal that the proposed COVID-19 modeling approach yields epidemiological models that can describe the real-time dynamics of the contagion, providing reliable predictions and valuable insight into the contagion mechanisms.
We have provided a complete workflow for analyzing infectious disease transmission systems described by a system of ODEs produced by a compartmental model. We emphasize that the proposed PINNs approach can easily be implemented without any background knowledge of numerical analysis (for example, stability conditions), requiring only familiarity with libraries for implementing neural networks. For a given scenario, the proposed PINNs approach can be effective for simulating different epidemic scenarios, testing various hypotheses, and designing suitable control measures.

6 ACKNOWLEDGMENTS
The study was supported by the National Natural Science Foundation of China (82041024 to Feng Chen and 81973142 to Yongyue Wei). This study was also partially supported by the Bill & Melinda Gates Foundation (INV-006371).
xb6stKM-O3T
This paper proposes a physics-informed neural network (PINN) to estimate time-varying parameters for SEIRD compartmental models and demonstrates learning the complex dynamics of the disease and forecasting accurately.
4: Good paper, accept
The paper is of good quality, clear, and well-written. The authors clearly explained their motivation, addressed the shortcomings of previous methods, and touched upon all the necessary architectures. I would like to bring up a few important points that require attention. 1. I'm a bit confused about the design of the constant variables $\alpha$ and $\epsilon$. In order to obtain a positive value, $\alpha$ is set to a positive multiple of a hyperbolic tangent function. However, some further clarification would be greatly appreciated. 2. I highly appreciate the way all the methodologies are explained with proper figures and equations. 3. Based on the data presented in Figure 4, the figures depicting the data fitting during training are in perfect alignment with the observed data points. I am curious if dropouts were utilized in the model and, if not, whether overfitting occurred. It would be greatly appreciated if the authors could provide their insight on this matter. 4. Based on Figure 5, the forecasting accuracy of $I, R, D$ for three different months appears to be relatively consistent when compared to actual observations. 5. One of the pros of this paper is that the authors discussed the main limitation of PINNs and how the requirement of prior knowledge could be a constraint while solving problems and may impact accuracy if underlying epidemiological laws are poorly understood or data inconsistencies exist. This paper on PINNs for infectious diseases is commendable, delivering accurate weekly forecasting results. While other studies have explored physics-informed neural networks in various compartmental models, such as SIR, SIRS, and SEIRM, this research stands out by successfully delivering on its initial claims.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
u9zVZTg_Ky
KDD.org/2023/Workshop/epiDAMIK
2023
Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics
["Xiao Ning", "Yongyue Wei", "Feng Chen"]
Modelling and predicting the behaviour of infectious diseases is essential for early warning and evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This paper proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate time-varying parameters from given data for the compartmental model. Our proposed PINNs approach captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. The experimental findings on synthesized data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs approach can be successfully extended to other regions and infectious diseases.
["Compartmental models", "COVID-19 transmission", "Physics-informed neural networks", "Forward-inverse problem"]
ABSTRACT
Modelling and predicting the behaviour of infectious diseases is essential for early warning and evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This paper proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate time-varying parameters from given data for the compartmental model. Our proposed PINNs approach captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. The experimental findings on synthesized data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs approach can be successfully extended to other regions and infectious diseases.

CCS CONCEPTS
• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability.

KEYWORDS
Compartmental models, COVID-19 transmission, Physics-informed neural networks, Forward-inverse problem

ACM Reference Format:
Xiao Ning, Yongyue Wei, and Feng Chen. 2023. Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics. In Proceedings of Make sure to enter the correct conference

∗corresponding author

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.
1 INTRODUCTION
The emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has presented an unprecedented and complex public health challenge, with emerging and re-emerging infectious diseases posing a significant threat. Compartmental models, governed by a nonlinear system of ordinary differential equations (ODEs), simulate multi-state population transitions by incorporating domain knowledge and mathematical assumptions to characterize the transmission dynamics of infectious diseases. These models are a powerful tool for detecting, understanding, and combating infectious disease outbreaks and have been widely used to evaluate the impact of various public health interventions during the COVID-19 pandemic [24]. However, since real-world data can be inherently stochastic, noisy, and even inaccessible, model optimization and methodological innovation are urgently needed to handle imperfect data and provide early warning of major public health emergencies.
Modelling and predicting the behaviour of infectious diseases is crucial for early warning and for evaluating effective interventions to mitigate damage. The first compartmental model, Susceptible-Infectious-Removed (SIR), was proposed by Kermack and McKendrick to study the epidemics of the Black Death in London and the plague in Mumbai [12].
Compartmental models allow the addition of compartments or transmission parameters to explore and estimate the impact of different assumptions regarding interventions. These parameters, included in the compartmental model, determine the transition progress between different disease statuses and can generate essential characteristics of an epidemic [2]. Finding the best-fit parameters of the system, given available data, is an inverse problem. Several numerical methods have been developed to infer constant model parameters from available data. These methods convert the inverse problem into an optimization problem and formulate an estimator by minimizing an objective function. However, since various non-pharmaceutical interventions (NPIs) are employed during the evolution of COVID-19, some model parameters are time-varying.
Identifying time-varying parameters in compartmental models is a complex inverse problem, making it challenging to accurately model outbreak dynamics [1, 10]. Recent advances in physics-informed machine learning have shown promise in COVID-19 transmission modelling by incorporating prior knowledge into deep neural networks to enhance their accuracy and robustness [11]. For example, Kharazmi et al. used PINNs to identify time-dependent parameters and data-driven fractional differential operators in several epidemiological models [13]. Long et al. proposed a variant of PINNs to fit daily reported cases and identify time-varying parameters in the susceptible-infectious-recovered-deceased model for the spread of COVID-19 [15]. Nascimento et al. introduced an approach that combines physics-informed and data-driven kernels to reduce the gap between predictions and observations [17]. Cai et al.
employed fractional physics-informed neural networks to refine the classical susceptible–exposed–infected–removed (SEIR) model, infer time-dependent parameters, and identify unobserved dynamics of the fractional SEIR model [3]. However, most of these approaches only consider the transmission rate as a function of time, while setting other parameters to fixed values. Additionally, they mainly use time-varying parameters for prediction and lack a systematic epidemiological analysis.
The primary focus of this paper is to introduce a novel method for estimating time-varying parameters in ODEs-based compartmental models and to assess the impact of NPIs based on the estimated parameters. We constructed a SEIRD compartmental model that takes the incubation period and the corresponding infectivity into account, including both unknown time-varying and constant parameters. Given the many unknown parameters and limited data, we modelled the system of ODEs as one network and the time-varying parameters as another network, reducing the number of neural network parameters. Furthermore, this structure of the PINNs approach is in line with prior epidemiological correlations. We then tested the effectiveness of our methodology using real-world reported data; the experiments showed that our proposed PINNs method effectively performs data-driven parameter estimation for modelling COVID-19 transmission. Moreover, as more data becomes available, it can be successfully extended to model and analyze infectious disease transmission dynamics in various regions and for different infectious diseases.

2 METHODOLOGY
2.1 Compartmental model
Compartmental models enable the simulation of multi-state population transitions by incorporating domain knowledge and mathematical assumptions to characterize the dynamics of infectious diseases.
These models are generally represented as the following nonlinear dynamical system:
\[
\frac{dU(t)}{dt} = F(t, U(t); \Xi), \qquad U(t_0) = U_0, \tag{1}
\]
where $U(t) \in \mathbb{R}^D$ (typically $D \gg 1$) is the state variable, $t \in [t_0, T]$ is the time range, $U(t_0)$ is the initial state, and $\Xi$ stands for the parameters of the dynamical system.
The SIR compartmental model provides the simplest framework that matches the reporting structure with the fewest underlying assumptions. Many variations of the SIR model have been proposed to analyze the transmission of COVID-19. In this paper, we consider a geographical region as isolated from other regions, and within this region we divide the population (N) of the study region into five compartments: susceptible (S, vulnerable to COVID-19 infection), exposed (E, latent individual or asymptomatic infective), infected (I, symptomatic infected), recovered (R, immune to COVID-19), and dead (D, death due to COVID-19). The details of the SEIRD model are described below:
\[
\begin{aligned}
\frac{dS(t)}{dt} &= -\beta S(t)\,\frac{\varepsilon E(t) + I(t)}{N} \\
\frac{dE(t)}{dt} &= \beta S(t)\,\frac{\varepsilon E(t) + I(t)}{N} - \frac{E(t)}{\alpha} \\
\frac{dI(t)}{dt} &= \frac{E(t)}{\alpha} - \gamma I(t) - \mu I(t) \\
\frac{dR(t)}{dt} &= \gamma I(t) \\
\frac{dD(t)}{dt} &= \mu I(t) \\
N &= S(t) + E(t) + I(t) + R(t) + D(t)
\end{aligned} \tag{2}
\]
where S(t), E(t), I(t), R(t), D(t) denote the number of susceptible, exposed, infectious, recovered, and deceased individuals over time, respectively, along with non-negative initial conditions S(0)=S₀, E(0)=E₀, I(0)=I₀, R(0)=R₀, D(0)=D₀. β ≥ 0 is the transmission rate, which represents the probability of infection per exposure when a susceptible individual (S) has contact with an infected patient (I) and becomes a latent exposed individual (E). A coefficient parameter ε is introduced since the transmission capacities of the exposed and onset populations may differ.
εβ represents the potential rate per exposure when a susceptible individual (S) has contact with an exposed individual (E) and becomes exposed in turn. α is the average duration of the incubation period, so 1/α is the rate at which latent individuals become infectious. Besides, γ ≥ 0 represents the recovery rate, μ ≥ 0 represents the death rate, and N is the total population.
The assumption that the parameters in Eq. 2 are time-constant is highly restrictive and unrealistic for a real-world epidemic where various interventions exist. The interventions implemented by authorities, and/or mutations of the virus, make the compartmental model require time-varying parameters to capture the dynamics of COVID-19. Therefore, by considering the transmission rate β, recovery rate γ, and death rate μ as functions of time β(t), γ(t), μ(t), the re-written SEIRD model is as follows:
\[
\begin{aligned}
\frac{dS(t)}{dt} &= -\beta(t) S(t)\,\frac{\varepsilon E(t) + I(t)}{N} \\
\frac{dE(t)}{dt} &= \beta(t) S(t)\,\frac{\varepsilon E(t) + I(t)}{N} - \frac{E(t)}{\alpha} \\
\frac{dI(t)}{dt} &= \frac{E(t)}{\alpha} - \gamma(t) I(t) - \mu(t) I(t) \\
\frac{dR(t)}{dt} &= \gamma(t) I(t) \\
\frac{dD(t)}{dt} &= \mu(t) I(t) \\
N &= S(t) + E(t) + I(t) + R(t) + D(t)
\end{aligned} \tag{3}
\]
Among them, the five variables S(t), E(t), I(t), R(t), D(t) have the same meanings as in Eq. 2. If we assume that the total population N is constant, then the sum of the increases and decreases across all compartments is 0, namely, dS(t)/dt + dE(t)/dt + dI(t)/dt + dR(t)/dt + dD(t)/dt = 0.
The basic reproduction number R₀ is a constant epidemiological parameter that provides an estimation of the contagiousness of the infectious disease. It also serves as a threshold parameter: when R₀ > 1, one infected individual can trigger an outbreak, while when R₀ < 1, the infection will not spread in the population. Given a compartmental model, R₀ can be calculated by the Next Generation Matrix (NGM) approach [7].
If the related parameters in the compartmental model are time-varying as in Eq.
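To make the forward problem concrete, the constant-parameter SEIRD system of Eq. 2 can be integrated numerically. The sketch below uses SciPy's `solve_ivp`; the parameter values and initial conditions are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seird_rhs(t, u, beta, eps, alpha, gamma, mu, N):
    """Right-hand side of the SEIRD system in Eq. 2 (constant parameters)."""
    S, E, I, R, D = u
    infection = beta * S * (eps * E + I) / N
    dS = -infection
    dE = infection - E / alpha
    dI = E / alpha - gamma * I - mu * I
    dR = gamma * I
    dD = mu * I
    return [dS, dE, dI, dR, dD]

# Illustrative values only (not the paper's fitted parameters).
N = 1e6
u0 = [N - 10, 0.0, 10.0, 0.0, 0.0]        # S0, E0, I0, R0, D0
params = (0.4, 0.99, 5.8, 0.05, 0.01, N)  # beta, eps, alpha, gamma, mu, N

sol = solve_ivp(seird_rhs, (0, 120), u0, args=params,
                t_eval=np.linspace(0, 120, 121), rtol=1e-8)
S, E, I, R, D = sol.y
# Because the five derivatives sum to zero, S+E+I+R+D stays equal to N.
```

Note the conservation property: summing the five equations gives zero, so the total population is preserved at every step, which is a quick sanity check on any implementation.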
3, the reproduction number R₀ is expected to keep changing as a function of time, called the effective reproduction number Rₜ. Deriving Rₜ for the proposed SEIRD model using the NGM approach yields the following expression:
\[
R_t = \varepsilon\,\alpha\,\beta(t) + \frac{\beta(t)}{\gamma(t) + \mu(t)} \tag{4}
\]
Rₜ provides an estimation of the contagiousness of the infectious disease during the course of an outbreak, where not every individual is considered susceptible.

2.2 Deep neural networks
Deep neural networks (DNNs) have emerged as a reliable and effective method for nonlinear function approximation, demonstrating remarkable capabilities in scientific computation and engineering applications, as evidenced by their widespread utilization. Many types of DNNs have been developed, such as recurrent neural networks, convolutional neural networks, and Transformers [16]; here we only consider fully-connected deep neural networks (FDNNs). Neural networks can be viewed as discretizations of continuous dynamical systems, making them well-suited for dealing with dynamic systems. Mathematically, an FDNN defines a mapping of the form
\[
F: x \in \mathbb{R}^d \mapsto y = F(x) \in \mathbb{R}^c, \tag{5}
\]
where d and c are the input and output dimensions, respectively. Generally, a standard neural unit of an FDNN receives an input x ∈ ℝᵈ and produces an output y ∈ ℝᵐ, y = σ(Wx + b), with W ∈ ℝ^{m×d} and b ∈ ℝᵐ being the weight matrix and bias vector, respectively. σ(·), referred to as the activation function, is designed to add element-wise non-linearity to the model. An FDNN with l hidden layers can be considered a nested composition of sequential standard neural units. For convenience, we denote the output of the DNN by y(x; θ), with θ standing for the set of all weights and biases.
Specifically, the jth neuron in layer l can be formulated as
\[
y_j^{[l]} = \sum_{k=1}^{n^{[l-1]}} w_{jk}^{[l]}\,\sigma^{[l-1]}\!\bigl(y_k^{[l-1]}\bigr) + b_j^{[l]}, \tag{6}
\]
where y_k^{[l−1]} represents the value of the kth neuron in layer l−1, n^{[l−1]} is the number of neurons in layer l−1, σ^{[l−1]} is the activation function of layer l−1, w_{jk}^{[l]} is the weight between the kth neuron in layer l−1 and the jth neuron in layer l, and b_j^{[l]} is the bias of the jth neuron in layer l.

Figure 1: Illustration of the FDNN. A neural network consists of an input layer (the input x), several hidden layers (composed of weights W^[l], biases b^[l], and activation function σ), and an output layer.

The nonlinear activation function enhances the ability of a DNN to model various non-linear problems, so selecting a suitable activation function matters greatly for DNNs in all domains. In particular, the activation function has an extremely significant impact on the success of training PINNs. The ReLU activation function has been widely used in many deep learning applications because it deals well with the vanishing-gradient problem [19]. However, for solving differential equations, the first and second derivatives of the neural networks serve as inputs to the loss function, which means the activation function of the DNN in a PINNs framework must have a non-zero second derivative. Indeed, many research works have demonstrated that the sigmoid and tanh functions are well suited for training PINNs frameworks.

2.3 PINNs for SEIRD model
The physics-informed neural networks (PINNs) approach is a data-driven approach to approximating the solution of differential equations and estimating unknown parameters.
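The FDNN of Eqs. 5–6 with tanh activations can be sketched in PyTorch as follows; the layer widths and depth here are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FDNN(nn.Module):
    """Fully-connected DNN with tanh activations, as in Eqs. 5-6.
    Sizes are illustrative, not the paper's reported architecture."""
    def __init__(self, d_in=1, d_out=5, width=32, hidden_layers=4):
        super().__init__()
        layers, prev = [], d_in
        for _ in range(hidden_layers):
            layers += [nn.Linear(prev, width), nn.Tanh()]  # y = tanh(Wx + b)
            prev = width
        layers.append(nn.Linear(prev, d_out))              # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, t):
        return self.net(t)

# One network maps time t to the five SEIRD states (S, E, I, R, D).
u_net = FDNN(d_in=1, d_out=5)
t = torch.linspace(0.0, 1.0, 100).reshape(-1, 1)
u = u_net(t)   # shape (100, 5)
```

Tanh is used here rather than ReLU precisely because, as noted above, the residual computation needs non-degenerate higher derivatives of the network output.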
The main idea of PINNs is to integrate a priori knowledge, such as physical laws or domain expertise modelled by differential equations, into deep neural networks. The equations in the compartmental model are coupled, and the coefficients are not independent of each other through the lens of biology and epidemics. In this context, we employ two separate DNNs with input t to represent the states U(t) and the time-varying parameters, respectively. For the two unknown constant parameters (α, ε), we designed modified tanh activation functions to represent them. The expression of the tanh function is tanh(x) = (eˣ − e⁻ˣ)/(eˣ + e⁻ˣ), with range [−1, 1]. Considering that α > 0 and 0 ≤ ε ≤ 1, we designed the expression of ε as tanh(x) and the expression of α as 21·tanh(x), where x is a random sample drawn from the uniform distribution on the interval [0, 3]. Meanwhile, modelling COVID-19 transmission involves the analysis of real-world data, for which the available data tend to be small and sparse. Such a PINNs architecture enables a well-trained model with a limited data set.
The PINNs framework is required to fit the data and simultaneously satisfy the equations, so the loss function includes two parts. The first part is the mismatch between the network output and the available data, and the other part is the residual of the ODEs. In this study, we employ the approximation U_NN(t; Θ_U) ≈ U(t) to represent the time-varying SEIRD equations (Eq. 3). The parameters Θ are optimized to achieve the best fit with the observed data. Considering the available data U_j at times t₁, t₂, ..., tₙ as training points (ground truth), the mean squared error (MSE) is calculated as follows:
\[
\mathrm{MSE}_u = \frac{1}{N}\sum_{j=1}^{N} \left\lVert \hat{U}_{NN}(t_j) - U(t_j) \right\rVert^2, \tag{7}
\]
The other component of the loss function is the residual of the system in Eq. 1; we define the residual of the equations as R_NN(t) = dU_NN(t)/dt − F(U_NN, t; Ξ).
The residual, denoted R(t; Θ_U), serves as a metric for assessing how accurately the approximation U_NN(t; Θ_U) satisfies the ordinary differential equations (ODEs). Evaluating the residual involves computing the time derivative of the neural network output, which can be accomplished using automatic differentiation [20]. Automatic differentiation is a computational technique that efficiently computes derivatives by applying the chain rule: it breaks functions down into elementary operations and calculates their derivatives, allowing accurate and efficient computation of the overall function's derivative with respect to its input variables.
\[
\mathrm{MSE}_r = \frac{1}{N}\sum_{j=1}^{N} \left\lVert R_{NN}(t_j) \right\rVert^2, \tag{8}
\]
In summary, the loss function of the proposed PINNs approach is defined as:
\[
\mathcal{L} = \omega_u\,\mathrm{MSE}_u + \omega_r\,\mathrm{MSE}_r \tag{9}
\]
The weight coefficients ω_u and ω_r in the loss function play a crucial role in balancing the optimization between learning from the data and satisfying the ODEs. These parameters allow fine-tuning of the model's behaviour and a trade-off between the two objectives: by adjusting ω_u and ω_r, the emphasis can be placed on either accurately fitting the available data or ensuring the ODE constraints are well satisfied.
Consequently, the PINNs model strives to minimize this loss function, effectively learning the underlying physics encoded in the ODEs while accurately capturing the patterns and relationships in the available data.

3 EXPERIMENTS
In this section, we describe the collected data and present the results obtained from parameter estimation and prediction using the proposed PINNs approach.

3.1 Data source
For the COVID-19 epidemic in Italy, the first official report of an indigenous case was on February 21, 2020 in Lodi province, while several epidemiologically linked cases were traced back to February 20, 2020.
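The composite loss of Eqs. 7–9 can be sketched in PyTorch, with the ODE residual of Eq. 3 obtained via automatic differentiation. The two networks below (`u_net` for the states, `p_net` for β, γ, μ) are placeholder architectures, and the constants α, ε, N are fixed here purely for illustration; in the paper they are learned or estimated.

```python
import torch
import torch.nn as nn

# Placeholder networks: u_net maps t -> (S,E,I,R,D); p_net maps t -> (beta,gamma,mu).
u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 5))
p_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 3), nn.Softplus())
alpha, eps, N = 5.8, 0.99, 1e6   # constants fixed here for illustration only

def pinn_loss(t, u_obs, w_u=1.0, w_r=1.0):
    t = t.clone().requires_grad_(True)
    u = u_net(t)
    S, E, I = u[:, 0], u[:, 1], u[:, 2]
    beta, gamma, mu = p_net(t).unbind(dim=1)

    # dU/dt via automatic differentiation, one state component at a time.
    dudt = torch.stack([
        torch.autograd.grad(u[:, k].sum(), t, create_graph=True)[0].squeeze(-1)
        for k in range(5)], dim=1)

    inf = beta * S * (eps * E + I) / N
    f = torch.stack([-inf, inf - E / alpha, E / alpha - (gamma + mu) * I,
                     gamma * I, mu * I], dim=1)

    mse_u = ((u - u_obs) ** 2).mean()    # data mismatch, Eq. 7
    mse_r = ((dudt - f) ** 2).mean()     # ODE residual, Eq. 8
    return w_u * mse_u + w_r * mse_r     # weighted total loss, Eq. 9
```

The `create_graph=True` flag keeps the derivative computation differentiable, so the residual term itself can be backpropagated through during training.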
The data considered in our study were downloaded from the Italian Civil Protection (http://www.protezionecivile.gov.it/media-comunicazione/comunicati-stampa) and the Ministry of Health (http://www.salute.gov.it/portale/home.html). They comprise cumulative infected, recovered, and deceased cases for the period from February 20, 2020 (day 1), to June 30, 2020 (day 132) [8]. To avoid weekly fluctuations induced by the work–leisure shift and natural noise in the real-world data, a 7-day moving average was used to smooth the reported data by averaging the values of each day with those of the 7 days before.

Figure 2: Schematic diagram of the PINNs framework for the SEIRD compartmental model with unknown (time-varying and constant) parameters. The green-shaded DNN represents the states U_NN(t) to fit the available data and infer the unobserved dynamics. The yellow-shaded DNN represents the time-varying parameters β(t), γ(t), μ(t). The two constant parameters (α, ε) are represented by the modified tanh activation function.

In order to control the transmission of COVID-19 in Italy, a lockdown and many restriction measures were implemented from February 23, 2020, as shown in the timeline in Fig. 3.
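The smoothing step above is a simple trailing moving average; a minimal sketch with pandas, using toy numbers in place of the reported counts (the exact window convention — current day plus the preceding days — is our reading of the description):

```python
import pandas as pd

# Toy daily series standing in for the reported counts (illustrative numbers).
reported = pd.Series([10, 12, 9, 30, 11, 13, 10, 40, 12, 14, 11, 12])

# Trailing 7-day moving average: each day averaged with the days before it,
# with a shrinking window at the start so no early days are dropped.
smoothed = reported.rolling(window=7, min_periods=1).mean()
```

This damps the weekday/weekend reporting cycle while preserving the overall trend that the PINNs model is asked to fit.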
All events and interventions are available from official websites https://mn.gov/governor/covid-19/news/.

Figure 3: Timeline of NPIs implemented in Italy to control COVID-19. DPCM: Decree of the Prime Minister.

3.2 Experimental settings
We train the PINNs model on a personal laptop running the Windows 10 operating system, equipped with an Intel(R) Core(TM) i7-8550U CPU operating at 1.8 GHz. We implement the PINNs approach using Python and the PyTorch framework [21]. For the numerical experiment, we train the neural networks using the Adam optimizer with an initial learning rate of 2×10⁻³ and a decay rate of 95% every 2000 epochs. The entire training process takes about 10 minutes to run 50,000 epochs on all training data, and predictions can be made within seconds.

3.3 Results
3.3.1 Data fitting. In this subsection, we evaluate how well the estimated parameters fit the SEIRD compartmental model on the available data.
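The optimizer configuration described above might be set up in PyTorch roughly as follows. We read "a decay rate of 95% every 2000 epochs" as multiplying the learning rate by 0.95 every 2000 steps; the tiny network and reduced epoch count are placeholders so the sketch runs quickly.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the PINNs model.
model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 5))

optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
# "Decay rate of 95% every 2000 epochs" read as lr <- 0.95 * lr per 2000 steps.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.95)

t = torch.rand(64, 1)
for epoch in range(4000):              # the paper trains for 50,000 epochs
    optimizer.zero_grad()
    loss = model(t).pow(2).mean()      # stand-in for the PINNs loss of Eq. 9
    loss.backward()
    optimizer.step()
    scheduler.step()
# After 4000 epochs the learning rate has been decayed twice: 2e-3 * 0.95**2.
```

A step schedule like this lets Adam take large early steps while the networks are far from fitting either the data or the residual, then settle as both loss terms shrink.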
Fig. 4 shows the fitting of the dynamics of the SEIRD model to the available real-world reported data (after 7-day smoothing), which demonstrates that the proposed PINNs approach can accurately fit the different fluctuations in the data.

Figure 4: Data fitting during training. (a) Fitting to the available data of currently infectious. (b) Fitting to the available data of cumulative recovered. (c) Fitting to the available data of cumulative deaths. Dot: observed data. Line: 7-day rolling average of observed data. Dashed: PINNs' prediction of the dynamics.

3.3.2 Inference. We aim to infer the time-varying parameters β(t), γ(t), and μ(t), as well as the constants α and ε, by solving the inverse problem of the SEIRD compartmental model. The incubation period and the infectiousness during this period are parameters specific to the virus, which can be obtained from clinical case information or inferred using statistical or mathematical modelling based on available data.
In our study, we estimate the incubation period of COVID-19 to be approximately 5.8 days, and the infectiousness during the incubation period is found to be nearly equal to 99.9% of that during the infection period.
The transmission dynamics of infectious diseases are influenced by multiple factors, such as government interventions, individual behaviour, and medical resources. In order to accurately model the spread of infectious diseases using compartmental models, it is necessary to update certain parameters over time to account for the evolving impact of interventions. These parameters include β(t), γ(t), and μ(t), which represent the time-varying rates of transmission, recovery, and mortality, respectively. In Fig. 5, we present the inference results for these time-varying parameters in Italy from February 20 to June 30, 2020. This analysis provides insights into how the values of β(t), γ(t), and μ(t) change over the specified time period, reflecting the impact of interventions and other factors on the dynamics of the disease.
Note that the events that have an impact on β(t) have to do with people's adaptation to preventive interventions and the interactions among individuals, whereas μ(t) relates to the availability and effectiveness of health care, as well as to resource availability in hospitals. γ(t) is known to be a disease-specific parameter (the inverse of the infectious period) but is also affected by the capacity of the healthcare system to accommodate hospitalization. As shown in Fig. 5 (a), the transmission rate β(t) fits well with what would be expected given these events. After the earliest traceable confirmed case of COVID-19 on February 20, 2020, the authorities of Italy started imposing localized lockdowns for certain regions on February 23, 2020; these control measures achieved a certain success, as demonstrated by a significant reduction in the transmission rate β(t). As for γ(t) and μ(t), the capacity of hospitals, particularly emergency rooms, had a considerable impact.
In the context of COVID-19, hospitals were at full capacity in the first months of the outbreak, and as the months went by, healthcare professionals learned more about possible treatments for the disease's symptoms and effects. This usually results in a decrease in the proportion of individuals who die from the disease (a decrease in μ(t)) and a decrease in the recovery time (an increase in γ(t)). As shown in Fig. 5 (b) and Fig. 5 (c), in qualitative terms, there was an increasing trend in γ(t) and a decreasing trend in μ(t).
The effective reproduction number is a crucial parameter in the SEIRD model that helps to predict the spread of infectious diseases. An Rₜ of less than 1 indicates that transmission of the infectious disease will gradually die out. By monitoring changes in Rₜ over time, public health officials can make informed decisions about interventions to control the spread of the disease. Fig. 6 (a) shows the evolution of Rₜ = εαβ(t) + β(t)/(γ(t) + μ(t)) in the proposed SEIRD compartmental model from February 20 to June 30, 2020. In the first several days of the outbreak, the effective reproduction number Rₜ was greater than 8, which resulted in a substantial outbreak. From February 25, Rₜ gradually decreased with the localized lockdown of certain regions and growing awareness of the epidemic. However, Rₜ was still greater than 1, which may be due to the partially incomplete lockdown, or to the movement of people from northern to southern Italy when the country-wide lockdown was announced but not yet enforced. When the national lockdown was fully operational and strictly enforced, Rₜ kept decreasing and finally fell below 1. Moreover, Rₜ steadily declined at the end of March due to a wider testing campaign that identified more mildly symptomatic infected individuals. Since June 15, Rₜ has shown a growing trend due to the DPCM declaring that general opening was in effect while social distancing and other measures remained.
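Given inferred parameter trajectories, the effective reproduction number of Eq. 4 can be evaluated pointwise. A minimal sketch, where the parameter trajectories are made-up values chosen so that transmission falls and recovery improves, mimicking the qualitative trend in Fig. 5:

```python
import numpy as np

def effective_reproduction_number(beta, gamma, mu, alpha=5.8, eps=0.99):
    """R_t = eps*alpha*beta(t) + beta(t)/(gamma(t)+mu(t)), as in Eq. 4.
    alpha and eps default to the values the paper estimates."""
    beta, gamma, mu = map(np.asarray, (beta, gamma, mu))
    return eps * alpha * beta + beta / (gamma + mu)

# Illustrative trajectories only: beta falls, gamma rises, mu falls.
beta  = np.array([0.50, 0.30, 0.10, 0.05])
gamma = np.array([0.03, 0.04, 0.06, 0.08])
mu    = np.array([0.05, 0.04, 0.02, 0.01])

rt = effective_reproduction_number(beta, gamma, mu)
# With these values Rt starts well above 1 and ends below 1.
```

The first term counts secondary infections generated during the (partially infectious) incubation stage, the second those generated during the symptomatic stage, which is why both are proportional to β(t).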
Additionally, to validate the estimated Rₜ, a serial Bayesian model was implemented to produce the Rₜ of Italy over the same time period [5], as shown in Fig. 6 (b). Parameters for the serial interval distribution in the model were set according to the published literature (mean = 7.5 d; SD = 3.4 d) [18, 23]. As shown in Fig. 6, the Rₜ estimated by the proposed PINNs approach is essentially the same as that estimated by the Bayesian model. Moreover, the result of the proposed approach provides a more detailed and accurate capture of the dynamics.

Figure 5: The time-varying parameters of the SEIRD model inferred by the PINNs approach on Italy data from February 20 to June 30, 2020. (a): transmission rate β(t). (b): recovery rate γ(t). (c): death rate μ(t).

3.3.3 Forecasting. Modelling results can provide reliable feedback for the authorities to make future decisions. The ODEs-based compartmental model requires determined initial conditions and model parameters to make predictions. To test the performance of the proposed PINNs approach, we performed predictions for the early outbreak of COVID-19 in Italy at one month, two months, and three months, respectively.
As the initial conditions can be obtained from the training data and the model parameters are already calibrated, we can forecast the epidemic dynamics by performing the forward process. In the prediction part, the values of β(t), γ(t), and μ(t) are assumed to keep their final values from the training time window. Fig. 7 displays the one-week predictions and corresponding observations for three time periods, produced using the SEIRD model with the estimated parameters. Note that the recovered and death states in the SEIRD model are terminal states, which means that the numbers of recovered and dead people are always non-decreasing. In turn, the number of infected people may see periods of increase and decrease, since infection is a transitional state.

Figure 6: Rₜ in Italy from February 24 to June 30, 2020. (a) Rₜ estimated by the proposed PINNs approach for the SEIRD model. (b) Rₜ estimated by the serial Bayesian model.

Fig. 7 (a) displays the one-week prediction based on the reported data from February 20 to March 20, 2020; Fig. 7 (b) the one-week prediction based on the reported data from February 20 to April 19, 2020; and Fig. 7 (c) the one-week prediction based on the reported data from February 20 to May 19, 2020. The close match between the predictions and the observations demonstrates that the parameters inferred by the learned network are very plausible, and shows the generalization ability of the model.
Furthermore, to quantitatively test the prediction performance of the proposed approach, we use three evaluation metrics to make fair and effective comparisons: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). The calculation methods are shown in Eqs.
(10)–(12):
\[
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |\hat{y}_i - y_i|, \tag{10}
\]
\[
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}, \tag{11}
\]
\[
\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n} \frac{|\hat{y}_i - y_i|}{\hat{y}_i} \times 100\%, \tag{12}
\]
Interventions to control COVID-19 kept being adjusted, which may introduce uncertainty; nevertheless, the experimental results in Table 1 show the highly accurate forecasting capability of the proposed approach.

Table 1: The forecasting performance at 3-day, 5-day, and 7-day horizons.

Metrics  | After March 20, 2020     | After April 19, 2020     | After May 19, 2020
         | 3-day   5-day   7-day    | 3-day   5-day   7-day    | 3-day   5-day   7-day
MAE(I)   | 5411    5790    6419     | 2503    3258    2792     | 1352    2170    3046
RMSE(I)  | 5431    5819    6519     | 3705    2618    3275     | 1567    2515    3514
MAPE(I)  | 11.60%  11.52%  11.78%   | 2.32%   3.04%   2.61%    | 2.20%   3.70%   5.41%
MAE(R)   | 813     1728    2944     | 2934    5704    9001     | 1643    2700    4170
RMSE(R)  | 959     2128    3706     | 3321    6821    10936    | 1880    3151    4972
MAPE(R)  | 11.93%  20.07%  31.04%   | 5.57%   10.00%  14.83%   | 1.23%   1.96%   2.97%
MAE(D)   | 423     543     927      | 330     235     318      | 147     109     95
RMSE(D)  | 527     637     1151     | 349     279     379      | 147     122     109
MAPE(D)  | 8.36%   8.98%   12.64%   | 1.35%   0.95%   1.24%    | 0.45%   0.34%   0.30%
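The three metrics of Eqs. 10–12 are straightforward to implement; a sketch with toy predictions and observations (note that, following Eq. 12 as written, MAPE normalizes by the prediction ŷ rather than the observation):

```python
import numpy as np

def mae(y_hat, y):
    return np.mean(np.abs(y_hat - y))                  # Eq. 10

def rmse(y_hat, y):
    return np.sqrt(np.mean((y_hat - y) ** 2))          # Eq. 11

def mape(y_hat, y):
    # Per Eq. 12 the denominator is the prediction y_hat, not the observation.
    return np.mean(np.abs(y_hat - y) / y_hat) * 100.0  # Eq. 12, in percent

y_hat = np.array([100.0, 200.0, 400.0])   # toy predictions
y     = np.array([110.0, 190.0, 380.0])   # toy observations
```

MAE and RMSE are scale-dependent (so the state-wise rows of Table 1 are not directly comparable across compartments), while MAPE is scale-free, which is why all three are reported.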
Figure 7: Forecasting results of the SEIRD model based on the estimated parameters. The first column plots the predicted current infections, the second column the predicted cumulative recovered, and the third column the predicted cumulative deaths; the dotted boxes represent the corresponding observations. (a) 7-day forecasting results based on the February 20 to March 20, 2020 time window. (b) 7-day forecasting results based on the February 20 to April 19, 2020 time window. (c) 7-day forecasting results based on the February 20 to May 19, 2020 time window.

4 DISCUSSION
Transmission modelling is increasingly being used to support public health decision-making in the control of infectious diseases. In this paper, a modified SEIRD compartmental model with time-varying parameters was introduced to describe and predict the dynamics of COVID-19 transmission in Italy. Estimating the unknown parameters of this model is a complex inverse problem, for whose solution we proposed a domain-specific PINNs approach.
The proposed approach has been applied to modelling COVID-19 transmission in Italy; the estimated parameters proved effective in fitting the COVID-19 contagion data and in providing accurate predictions of its evolution. Beyond these results, the proposed PINNs approach allows us to gain a more detailed understanding of the contagion mechanism.
Fig. 5 (a) shows that the control measures imposed by the authorities seem to have been effective in reducing the key transmission rate parameter β(t). Fig. 5 (b) and (c) show that the recovery rate tends to increase with time and the death rate to decrease.
This phenomenon, which seems not directly related to the lockdown, can be attributed to several causes, among them a better understanding of the disease and a consequent improvement in the effectiveness of the response of the national health system, and possibly a change in the nature, virulence, and lethality of the virus. Furthermore, we evaluate how well the estimated parameters fit the SEIRD compartmental model by comparing them with the results of previous publications. We compare our results to those obtained with the rolling regression framework [4], where the order of magnitude of the time-varying parameters β(t), γ(t) and μ(t) agrees and the trend is almost identical. A comprehensive meta-analysis demonstrated that the median incubation period for general transmissions in early outbreaks was 5.8 days (95% confidence interval (95% CI): 5.3, 6.2) [25]. Li et al. analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP; the results show that the mean incubation period was 5.2 days (95% CI, 4.1 to 7.0) [14]. Yang et al. collected contact tracing data in a municipality in Hubei province during a full outbreak period to estimate the incubation period and serial interval of COVID-19; the estimated median incubation period of COVID-19 is 5.4 days (bootstrapped 95% CI 4.8–6.0) [26]. The α estimated by the proposed PINNs approach is 5.8, which is consistent with the results of the above research. The ε estimated by the proposed PINNs approach is 0.99, which means that the transmission capacities of the exposed and onset populations are nearly identical [9].
Numerous related studies demonstrate that the incubation period and the infection period carry almost the same capacity for transmission [6, 22].

The goal of modeling the transmission dynamics of an infectious disease is to capture the mechanisms by which a host passes the infection on to other individuals. Once this information is clear, a model can be used as a sort of experimental system to simulate what would happen to the evolution of the disease under different interventions. While the proposed PINNs approach indeed offers many advantages, it does have some limitations. One of the main limitations is that the PINNs architecture requires prior knowledge of the physical laws and constraints that govern the problem being solved. The structure of compartmental models may change depending on the question of interest and impact their accuracy. This means that if the underlying epidemiological laws are not well understood, or if the available data are not consistent with the known epidemiological laws, the model may not work well. It should be noted, however, that the emphasis of infectious disease models is on their application to public health, not on the mathematics of these models. As the world-renowned statistician George E. P. Box put it: "All models are wrong, but some are useful."

5 CONCLUSIONS

In this paper, we proposed a novel PINNs approach to estimate the unknown parameters (both time-varying and constant) of an ODEs-based compartmental model to depict the dynamics of COVID-19 transmission. The experimental results with real-world reported data reveal that the proposed COVID-19 modeling approach yields epidemiological models that can describe the real-time dynamics of the contagion, providing reliable predictions and valuable insight into the contagion mechanisms.
We have provided a complete workflow for analyzing infectious disease transmission systems described by an ODEs-based compartmental model. We emphasize that the proposed PINNs approach can easily be implemented without any background knowledge of numerical analysis (for example, stability conditions), requiring only familiarity with libraries for implementing neural networks. For a given scenario, the proposed PINNs approach can be effective for simulating different epidemic scenarios, testing various hypotheses, and designing suitable control measures.

6 ACKNOWLEDGMENTS

The study was supported by the National Natural Science Foundation of China (82041024 to Feng Chen and 81973142 to Yongyue Wei). This study was also partially supported by the Bill & Melinda Gates Foundation (INV-006371).
HHtLa2J7QT
Interesting Idea That Would Benefit From Better Clarity and Justification
2: Marginally below acceptance threshold
This paper proposes the use of physics-informed neural networks (PINNs) to estimate time-varying parameters of ODEs to model transmission dynamics for infectious diseases.

Positives:
+ The idea of modeling the transmission dynamics through a SEIRD model with PINNs is an interesting idea and contribution
+ The authors contain sufficient background on related work in order to present their contribution, and how their method is formed
+ The epidemiological analysis throughout the results is much appreciated and provides a deeper appreciation of many of the results obtained
+ The authors do a great job of contextualizing results with respect to the policies enacted in Italy during the beginning of the global pandemic. This contextualization really helps in understanding the learned trends for the time-varying parameters, and for R_t.

Pieces That Could Be Improved:
- Grammar and writing clarity throughout the manuscript could be improved significantly. For example, the description of the compartmental model in section 2.1 could be significantly improved for clarity, as it is currently difficult to fully understand the different parameters of the model. This is an issue throughout the paper and makes the paper hard to follow
- More information about how evaluation is performed should be provided. As it is written, it is unclear if different data were used for training the models and evaluation (in fact, currently, it seems they are the same data). This could pose an issue with proper validation.
- It is not clear whether the reported MAE, RMSE, and MAPE results are sufficiently strong. It would be beneficial to see more baselines to see if the proposed PINN is actually performing well, such as if traditional NNs (such as recurrent neural networks) that only forecast I, R, D without modeling the ODEs perform worse.
- There should be more ablations to understand if their proposed changes to the model actually result in meaningful changes.
For example, are the PINN-based activation functions for alpha and epsilon meaningful? And do the two separate models for the ODEs and the time-varying parameters make a difference compared to one shared model? As these points were not well motivated in the methods, it would be useful to see their importance in the real experimental results.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
u9zVZTg_Ky
KDD.org/2023/Workshop/epiDAMIK
2023
Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics
["Xiao Ning", "Yongyue Wei", "Feng Chen"]
Modelling and predicting the behaviour of infectious diseases is essential for early warning and evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This paper proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate time-varying parameters from given data for the compartmental model. Our proposed PINNs approach captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. The experimental findings on synthesized data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs approach can be successfully extended to other regions and infectious diseases.
["Compartmental models", "COVID-19 transmission", "Physics-informed neural networks", "Forward-inverse problem"]
ABSTRACTModelling and predicting the behaviour of infectious diseases isessential for early warning and evaluating the most effective in-terventions to prevent significant harm. Compartmental modelsproduce a system of ordinary differential equations (ODEs) that arerenowned for simulating the transmission dynamics of infectiousdiseases. However, the parameters in compartmental models areoften unknown, and they can even change over time in the realworld, making them difficult to determine. This paper proposes anadvanced artificial intelligence approach based on physics-informedneural networks (PINNs) to estimate time-varying parameters fromgiven data for the compartmental model. Our proposed PINNsapproach captures the complex dynamics of COVID-19 by integrat-ing a modified Susceptible-Exposed-Infectious-Recovered-Death(SEIRD) compartmental model with deep neural networks. Theexperimental findings on synthesized data have demonstrated thatour method robustly and accurately learns the dynamics and fore-casts future states. Moreover, as more data becomes available, ourproposed PINNs approach can be successfully extended to otherregions and infectious diseases.CCS CONCEPTS•Computer systems organization →Embedded systems ;Re-dundancy ; Robotics; •Networks→Network reliability.KEYWORDSCompartmental models, COVID-19 transmission, Physics-informedneural networks, Forward-inverse problemACM Reference Format:Xiao Ning, Yongyue Wei, and Feng Chen. 2023. Physics-informed neuralnetworks integrating compartmental model for analyzing COVID-19 trans-mission dynamics. In Proceedings of Make sure to enter the correct conference∗corresponding authorPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. 
Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from [email protected] acronym ’XX, June 03–05, 2023, Woodstock, NY©2023 Association for Computing Machinery.ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00https://doi.org/XXXXXXX.XXXXXXXtitle from your rights confirmation emai (Conference acronym ’XX). ACM,New York, NY, USA, 8 pages. https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONThe emergence of severe acute respiratory syndrome coronavirus 2(SARS-CoV-2) has presented an unprecedented and complex publichealth challenge, with emerging and re-emerging infectious dis-eases posing a significant threat. Compartmental models, governedby a nonlinear system of ordinary differential equations (ODEs),simulate multi-state population transitions by incorporating do-main knowledge and mathematical assumptions to characterize thetransmission dynamics of infectious diseases. These models are apowerful tool for detecting, understanding, and combating infec-tious disease outbreaks and have been widely used to evaluate theimpact of various public health interventions during the COVID-19pandemic [ 24]. However, since real-world data can be inherentlystochastic, noisy, and even inaccessible, model optimization andmethodological innovation are urgently needed to handle imperfectdata and provide early warning of major public health emergencies.Modeling and predicting the behavior of infectious diseases iscrucial for early warning and evaluating effective interventionsto mitigate damage. The first compartmental model, Susceptible-Infectious-Removed (SIR), was proposed by Kermack and McK-endrick to study the epidemics of the Black Death in London andthe plague in Mumbai [ 12]. 
Compartmental models allow the addition of compartments or transmission parameters to explore and estimate the impact of different assumptions regarding interventions. These parameters, included in the compartmental model, determine the transition progress between different disease statuses and can generate essential characteristics of an epidemic [2]. Finding the best-fit parameters of the system, given available data, is an inverse problem. Several numerical methods have been developed to infer constant model parameters from available data. These methods convert the inverse problem into an optimization problem and formulate an estimator by minimizing an objective function. However, since various non-pharmaceutical interventions (NPIs) are employed during the evolution of COVID-19, some model parameters are time-varying.

Identifying time-varying parameters in compartmental models is a complex inverse problem, making it challenging to accurately model outbreak dynamics [1, 10]. Recent advances in physics-informed machine learning have shown promise in COVID-19 transmission modelling by incorporating prior knowledge into deep neural networks to enhance their accuracy and robustness [11]. For example, Kharazmi et al. used PINNs to identify time-dependent parameters and data-driven fractional differential operators in several epidemiological models [13]. Long et al. proposed a variant of PINNs to fit daily reported cases and identify time-varying parameters in the susceptible-infectious-recovered-deceased model for the spread of COVID-19 [15]. Nascimento et al. introduced an approach that combines physics-informed and data-driven kernels to reduce the gap between predictions and observations [17]. Cai et al.
employed fractional physics-informed neural networks to refine the classical susceptible–exposed–infected–removed (SEIR) model, infer time-dependent parameters, and identify unobserved dynamics of the fractional SEIR model [3]. However, most of these approaches only consider the transmission rate as a function of time, while setting the other parameters to fixed values. Additionally, they mainly use the time-varying parameters for prediction and lack a systematic epidemiological analysis.

The primary focus of this paper is to introduce a novel method for evaluating time-varying parameters in ODEs-based compartmental models and to assess the impact of NPIs based on the estimated parameters. We constructed a SEIRD compartmental model that takes the incubation period and the corresponding infectivity into account, including both unknown time-varying and constant parameters. Given many unknown parameters and limited data, we modeled the system of ODEs as one network and the time-varying parameters as another network to reduce the number of neural network parameters. Furthermore, such a structure of the PINNs approach is in line with prior epidemiological correlations. We then tested the effectiveness of our methodology using real-world reported data; simulation experiments showed that our proposed PINNs method effectively performs data-driven parameter estimation for modelling COVID-19 transmission. Moreover, as more data becomes available, it can be successfully extended to model and analyze infectious disease transmission dynamics in various regions and for different infectious diseases.

2 METHODOLOGY

2.1 Compartmental model

Compartmental models enable the simulation of multi-state population transitions by incorporating domain knowledge and mathematical assumptions to characterize the dynamics of infectious diseases.
These models are generally represented as the following nonlinear dynamical system:

\frac{dU(t)}{dt} = F(t, U(t); \Xi), \qquad U(t_0) = U_0, \quad (1)

where U(t) \in \mathbb{R}^D (typically D \gg 1) is the state variable, t \in [t_0, T] is the time range, U(t_0) is the initial state, and \Xi stands for the parameters of the dynamical system.

The SIR compartmental model provides the simplest framework that matches the reporting structure with the fewest underlying assumptions. Many variations of the SIR model have been proposed to analyze the transmission of COVID-19. In this paper, we consider a geographical region as isolated from other regions, and within such a region we divide the population (N) of the study region into five compartments: susceptible (S, vulnerable to COVID-19 infection), exposed (E, latent or asymptomatic infective individuals), infected (I, symptomatic infected), recovered (R, immune to COVID-19), and dead (D, death due to COVID-19). The details of the SEIRD model are described below:

\frac{dS(t)}{dt} = -\beta S(t)\,\frac{\varepsilon E(t) + I(t)}{N}
\frac{dE(t)}{dt} = \beta S(t)\,\frac{\varepsilon E(t) + I(t)}{N} - \frac{E(t)}{\alpha}
\frac{dI(t)}{dt} = \frac{E(t)}{\alpha} - \gamma I(t) - \mu I(t)
\frac{dR(t)}{dt} = \gamma I(t)
\frac{dD(t)}{dt} = \mu I(t)
N = S(t) + E(t) + I(t) + R(t) + D(t) \quad (2)

where S(t), E(t), I(t), R(t), D(t) denote the number of susceptible, exposed, infectious, recovered, and deceased individuals over time, respectively, with non-negative initial conditions S(0) = S_0, E(0) = E_0, I(0) = I_0, R(0) = R_0, D(0) = D_0. β ≥ 0 is the transmission rate, which represents the probability of infection per exposure when a susceptible individual (S) has contact with an infected patient (I) and becomes a latent exposed individual (E). A coefficient parameter ε is introduced since the transmission capacity of the exposed and onset populations may be different.
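As a sanity check on the SEIRD system in Eq. 2, it can be integrated numerically; the sketch below uses SciPy's `solve_ivp` with illustrative (not estimated) parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def seird_rhs(t, u, beta, eps, alpha, gamma, mu, N):
    """Right-hand side of the constant-parameter SEIRD model (Eq. 2)."""
    S, E, I, R, D = u
    new_exposed = beta * S * (eps * E + I) / N
    return [-new_exposed,                    # dS/dt
            new_exposed - E / alpha,         # dE/dt
            E / alpha - gamma * I - mu * I,  # dI/dt
            gamma * I,                       # dR/dt
            mu * I]                          # dD/dt

# Hypothetical values, loosely inspired by the paper's later estimates.
N = 60_000_000
u0 = [N - 200, 100, 100, 0, 0]
sol = solve_ivp(seird_rhs, (0, 120), u0,
                t_eval=np.linspace(0, 120, 121),
                args=(0.4, 0.99, 5.8, 0.05, 0.01, N))

# The compartments always sum to N, matching the last line of Eq. 2.
assert np.allclose(sol.y.sum(axis=0), N)
```

Runge–Kutta integrators preserve the linear invariant S+E+I+R+D exactly (up to roundoff), which is a quick way to catch sign errors in the right-hand side.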
εβ represents the potential rate per exposure when a susceptible individual (S) has contact with an exposed individual (E) and becomes another exposed individual (E). α is the average duration of the incubation period, and 1/α is the rate at which latent individuals become infectious. Besides, γ ≥ 0 represents the recovery rate, μ ≥ 0 represents the death rate, and N is the total population.

The assumption that the parameters in Eq. 2 are constant in time is highly restrictive and unrealistic for a real-world epidemic where various interventions exist. The interventions implemented by the authorities, mutations of the virus, and other factors require the compartmental model to have time-varying parameters to capture the dynamics of COVID-19. Therefore, considering the transmission rate β, recovery rate γ, and death rate μ as functions of time β(t), γ(t), μ(t), the re-written SEIRD model is as follows:

\frac{dS(t)}{dt} = -\beta(t) S(t)\,\frac{\varepsilon E(t) + I(t)}{N}
\frac{dE(t)}{dt} = \beta(t) S(t)\,\frac{\varepsilon E(t) + I(t)}{N} - \frac{E(t)}{\alpha}
\frac{dI(t)}{dt} = \frac{E(t)}{\alpha} - \gamma(t) I(t) - \mu(t) I(t)
\frac{dR(t)}{dt} = \gamma(t) I(t)
\frac{dD(t)}{dt} = \mu(t) I(t)
N = S(t) + E(t) + I(t) + R(t) + D(t) \quad (3)

Among them, the five variables S(t), E(t), I(t), R(t), D(t) have the same meanings as in Eq. 2. If we assume that the total population N is constant, then the sum of the changes of all compartments is 0, namely \frac{dS(t)}{dt} + \frac{dE(t)}{dt} + \frac{dI(t)}{dt} + \frac{dR(t)}{dt} + \frac{dD(t)}{dt} = 0.

The basic reproduction number R0 is a constant epidemiological parameter that provides an estimation of the contagiousness of an infectious disease. It also serves as a threshold parameter: when R0 > 1, one infected individual can trigger an outbreak, while when R0 < 1, the infection will not spread in the population. Given a compartmental model, R0 can be calculated by the Next Generation Matrix (NGM) approach [7]. If the related parameters in the compartmental model are time-varying as in Eq.
3, the reproduction number R0 is expected to keep changing as a function of time, called the effective reproduction number Rt. Computing Rt for the proposed SEIRD model with the NGM approach yields the following expression:

R_t = \varepsilon\,\beta(t)\,\alpha + \frac{\beta(t)}{\gamma(t) + \mu(t)} \quad (4)

Rt provides an estimation of the contagiousness of the infectious disease during the course of an outbreak, where not every individual is considered susceptible.

2.2 Deep neural networks

Deep neural networks (DNNs) have emerged as a reliable and effective method for nonlinear function approximation, demonstrating remarkable capabilities in scientific computation and engineering applications, as evidenced by their widespread utilization. Many types of DNNs have been developed, such as recurrent neural networks, convolutional neural networks, and Transformers [16]; here we only consider fully-connected deep neural networks (FDNNs). Neural networks can be viewed as discretizations of continuous dynamical systems, making them well-suited for dealing with dynamic systems. Mathematically, an FDNN defines a mapping of the form

F: x \in \mathbb{R}^d \Longrightarrow y = F(x) \in \mathbb{R}^c, \quad (5)

where d and c are the input and output dimensions, respectively. Generally, a standard neural unit of an FDNN receives an input x \in \mathbb{R}^d and produces an output y \in \mathbb{R}^m, y = \sigma(Wx + b), with W \in \mathbb{R}^{m \times d} and b \in \mathbb{R}^m being the weight matrix and bias vector, respectively. σ(·), referred to as the activation function, adds element-wise non-linearity to the model. An FDNN with l hidden layers can be considered a nested composition of sequential standard neural units. For convenience, we denote the output of the DNN by y(x; θ), with θ standing for the set of all weights and biases.
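The FDNN of Eq. (5) translates directly into PyTorch; a sketch with tanh activations (recommended later in the text) and our own illustrative layer sizes:

```python
import torch
import torch.nn as nn

class FDNN(nn.Module):
    """Fully-connected network y(x; theta): a nested composition of
    standard units sigma(Wx + b), as in Eqs. (5)-(6)."""

    def __init__(self, in_dim=1, hidden=32, depth=4, out_dim=5):
        super().__init__()
        dims = [in_dim] + [hidden] * depth + [out_dim]
        layers = []
        for i in range(len(dims) - 2):
            layers += [nn.Linear(dims[i], dims[i + 1]), nn.Tanh()]
        layers.append(nn.Linear(dims[-2], dims[-1]))  # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, t):
        return self.net(t)

u_net = FDNN()                 # e.g., t -> (S, E, I, R, D)
out = u_net(torch.rand(16, 1))
assert out.shape == (16, 5)
```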
Specifically, the jth neuron in layer l can be formulated as

y_j^{[l]} = \sum_{k=1}^{n^{[l-1]}} w_{jk}^{[l]}\, \sigma^{[l-1]}\!\left(y_k^{[l-1]}\right) + b_j^{[l]}, \quad (6)

where y_k^{[l-1]} represents the value of the kth neuron in layer l−1, n^{[l-1]} represents the number of neurons in layer l−1, σ^{[l-1]} is the activation function of layer l−1, w_{jk}^{[l]} is the weight between the kth neuron in layer l−1 and the jth neuron in layer l, and b_j^{[l]} is the bias of the jth neuron in layer l.

Figure 1: Illustration of the FDNN. A neural network consists of an input layer (the input x), several hidden layers (composed of weights W^l, biases b^l, and activation function σ), and an output layer.

The nonlinear activation function enhances the ability of a DNN to model various non-linear problems; selecting a suitable activation function matters greatly for DNNs applied in all domains. In particular, the activation function has an extremely significant
The main idea of PINNs isto integrate a priori knowledge as physical laws or domain exper-tise modelled by differential equations into deep neural networks.Equations in the compartmental model possess coupling and thecoefficients are not independent of each other through the lens ofbiological and epidemics. In this context, we employ two separateDNNs with input tto represent the stats U(t)and time-varying pa-rameters, respectively. For the two unknown constant parameters(α,ε), we designed the modified tanh activation function to repre-sent them. The expression of the tanh function istanh(x)=ex−e−xex+e−x,and the range of values belong to [-1, 1]. Considering that α>0and0≤ε≤1, thus we designed the expression of εastanh(x),the expression of αas21·tanh(x),xis a random sample withuniform distribution generated from the interval [0, 3]. Meanwhile,COVID-19 transmission involves the analysis of real-world data,for which the available data size tends to be small and sparse. Sucha PINNs architecture enables a well-trained model with a limiteddata set.The PINNs framework is required to fit the data and simultane-ously satisfy the equations, whereby the loss function includes twoparts. The first part is the mismatch between the network outputand the available data, and another part is the residual of ODEs. Inthis study, we employ the approximation UN N (t;ΘU)≈U(t)toConference acronym ’XX, June 03–05, 2023, Woodstock, NY Trovato and Tobin, et al.represent the time-varying SEIRD equations (Eqs 3). The parame-tersΘare optimized to achieve the best fit with the observed data.Considering the available data Ujat timest1,t2,...,tnas trainingpoints (ground truth), the mean squared error (MSE) is calculatedas follows:MSEu=1NN∑︁j=1ˆUNN(tj)−U(tj)2, (7)Another component of the loss function is the residual of the sys-tems of Eqs. 1, we define the residual of equations as RNN(t)=dU(t)dt−F(UN N,t;Ξ). 
Another component of the loss function is the residual of the system in Eq. 1; we define the residual of the equations as R_{NN}(t) = \frac{dU_{NN}(t)}{dt} - F(U_{NN}, t; \Xi). The residual, denoted R(t; \Theta_U), serves as a metric for assessing how accurately the approximation U_{NN}(t; \Theta_U) satisfies the ordinary differential equations (ODEs). Evaluating the residual involves computing the time derivative of the neural network output, which can be accomplished using automatic differentiation [20]. Automatic differentiation is a computational technique that efficiently computes derivatives by applying the chain rule: it breaks functions down into elementary operations and calculates their derivatives, allowing accurate and efficient computation of the overall function's derivative with respect to its input variables.

\mathrm{MSE}_r = \frac{1}{N}\sum_{j=1}^{N} \left\| R_{NN}(t_j) \right\|^2, \quad (8)

In summary, the loss function of the proposed PINNs approach is defined as:

L = \omega_u\,\mathrm{MSE}_u + \omega_r\,\mathrm{MSE}_r \quad (9)

The weight coefficients ω_u and ω_r in the loss function play a crucial role in balancing the optimization between learning from the data and satisfying the ODEs. These parameters allow fine-tuning of the model's behaviour and of the trade-off between the two objectives. By adjusting the values of ω_u and ω_r, the emphasis can be placed either on accurately fitting the available data or on ensuring the ODE constraints are well satisfied. Consequently, the PINNs model strives to minimize this loss function, effectively learning the underlying physics encoded in the ODEs while accurately capturing the patterns and relationships in the available data.

3 EXPERIMENTS

In this section, we describe the collected data and present the results obtained from parameter estimation and prediction using the proposed PINNs approach.

3.1 Data source

For the COVID-19 epidemic in Italy, the first official report of an indigenous case was on February 21, 2020 in Lodi province, while several epidemiologically linked cases were traced back to February 20, 2020.
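Putting Eqs. (7)–(9) together, the composite loss with the autograd-based residual can be sketched as follows (function and argument names are our own; `f_rhs` stands for the right-hand side F of the ODE system):

```python
import torch

def pinn_loss(u_net, f_rhs, t_data, u_data, t_coll, w_u=1.0, w_r=1.0):
    """L = w_u * MSE_u + w_r * MSE_r (Eqs. 7-9).

    dU/dt is obtained by automatic differentiation of the network
    output with respect to the input time t.
    """
    # Data mismatch (Eq. 7).
    mse_u = torch.mean((u_net(t_data) - u_data) ** 2)

    # ODE residual (Eq. 8) at collocation points.
    t = t_coll.clone().requires_grad_(True)
    u = u_net(t)  # shape (n, D)
    du_dt = torch.stack(
        [torch.autograd.grad(u[:, j].sum(), t, create_graph=True)[0][:, 0]
         for j in range(u.shape[1])], dim=1)
    mse_r = torch.mean((du_dt - f_rhs(t, u)) ** 2)

    return w_u * mse_u + w_r * mse_r
```

Minimizing this loss pulls U_NN toward the observations while forcing it to satisfy the ODEs at the collocation points.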
The data considered in our study were downloaded from the Italian Civil Protection (http://www.protezionecivile.gov.it/media-comunicazione/comunicati-stampa) and the Ministry of Health (http://www.salute.gov.it/portale/home.html). They comprise cumulative infected, recovered, and deceased cases for the period from February 20, 2020 (day 1), to June 30, 2020 (day 132) [8]. To avoid weekly fluctuations induced by the work-leisure shift and natural noise in the real-world data, a 7-day moving average was used to smooth the reported data by averaging the values of each day with those of the 7 days before.

Figure 2: Schematic diagram of the PINNs framework for the SEIRD compartmental model with unknown (time-varying and constant) parameters. The green-shaded DNN represents the states U_{NN}(t) to fit the available data and infer the unobserved dynamics. The yellow-shaded DNN represents the time-varying parameters β(t), γ(t), μ(t). The two constant parameters (α, ε) are represented by the modified tanh activation function.

In order to control the transmission of COVID-19 in Italy, lockdown and many restriction measures were implemented from February 23, 2020, as shown in the timeline in Fig. 3.
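The 7-day smoothing can be reproduced with a rolling mean; the sketch below uses a trailing 7-day window in pandas (one plausible reading of "averaging each day with the 7 days before", which could also be read as an 8-day window):

```python
import pandas as pd

def smooth_7day(daily_series):
    """Trailing 7-day moving average of a reported time series."""
    return pd.Series(daily_series).rolling(window=7, min_periods=1).mean()

smoothed = smooth_7day([1, 2, 3, 4, 5, 6, 7])
assert smoothed.iloc[-1] == 4.0  # mean of 1..7
```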
All events and interventions are available from the official website https://mn.gov/governor/covid-19/news/.

Figure 3: Timeline of NPIs implemented in Italy to control COVID-19 (February–June 2020), including the first official report of a case (February 21), localized lockdowns for certain regions (from February 23), national lockdown with commercial activities shut down, bans on gatherings, sports events, parks, public gardens, and open-air recreational activity, closure of all non-essential or non-strategic industrial activities, and the staged release of restrictions, with general opening in effect while social distancing and other measures remained. DPCM: Decree of the Prime Minister.

3.2 Experimental settings

We train the PINNs model on a personal laptop running the Windows 10 operating system, equipped with an Intel(R) Core(TM) i7-8550U CPU operating at 1.8 GHz. We implement the PINNs approach using Python and the PyTorch framework [21]. For the numerical experiment, we train the neural networks using the Adam optimizer with an initial learning rate of 2×10^{-3} and a decay rate of 95% every 2000 epochs. The entire training process takes about 10 minutes to run 50,000 epochs on all training data, and predictions can be made within seconds.

3.3 Results

3.3.1 Data fitting. In this subsection, we evaluate how well the estimated parameters fit the SEIRD compartmental model on the available data.
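The training settings in Section 3.2 map directly onto PyTorch; reading "a decay rate of 95% every 2000 epochs" as multiplying the learning rate by 0.95, a sketch (with a stand-in model and placeholder loss) is:

```python
import torch

model = torch.nn.Linear(1, 5)  # stand-in for the two PINNs networks
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.95)

for epoch in range(4000):
    optimizer.zero_grad()
    loss = model(torch.rand(8, 1)).pow(2).mean()  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()

# After 4000 epochs the learning rate has decayed twice: 2e-3 * 0.95^2.
assert abs(optimizer.param_groups[0]["lr"] - 2e-3 * 0.95 ** 2) < 1e-9
```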
Fig. 4 shows the fitting of the dynamics of the SEIRD model to the available real-world reported data (after 7-day smoothing), which demonstrates that the proposed PINNs approach can accurately fit the different fluctuations in the data.

Figure 4: Data fitting during training. (a) Fitting to the available data of current infectious. (b) Fitting to the available data of cumulative recovered. (c) Fitting to the available data of cumulative deaths. Dot: observed data. Line: 7-day rolling average of observed data. Dashed: PINNs' prediction of the dynamics.

3.3.2 Inference. We aim to infer the time-varying parameters β(t), γ(t), μ(t), as well as the constants α and ε, by solving the inverse problem of the SEIRD compartmental model. The incubation period and the infectiousness during this period are parameters specific to the virus, which can be obtained from clinical case information or inferred using statistical or mathematical modelling based on available data.
In our study, we estimate the incubation period of COVID-19 to be approximately 5.8 days, and the infectiousness during the incubation period is found to be nearly equal (99.9%) to that of the infection period.

The transmission dynamics of infectious diseases are influenced by multiple factors, such as government interventions, individual behaviour, and medical resources. In order to accurately model the spread of infectious diseases using compartmental models, it is necessary to update certain parameters over time to account for the evolving impact of interventions. These parameters include β(t), γ(t), and μ(t), which represent the time-varying rates of transmission, recovery, and mortality, respectively. In Figure 5, we present the inference results for these time-varying parameters in Italy from February 20 to June 30, 2020. This analysis provides insight into how the values of β(t), γ(t), and μ(t) change over the specified time period, reflecting the impact of interventions and other factors on the dynamics of the disease.

Note that the events that have an impact on β(t) have to do with people's adaptation to preventive interventions and the interactions among individuals, whereas μ(t) relates to the availability and effectiveness of health care, as well as to resource availability in hospitals. γ(t) is known to be a disease-specific parameter (the inverse of the infectious period) but is also affected by the capacity of the healthcare system to accommodate hospitalization. As shown in Fig. 5 (a), the transmission rate β(t) fits well with what would be expected given such events. After the earliest traceable confirmed case of COVID-19 on February 20, 2020, the authorities of Italy started imposing localized lockdowns for certain regions on February 23, 2020; these control measures achieved a certain success, as demonstrated by a significant reduction in the transmission rate β(t). As for γ(t) and μ(t), hospitals' capacity, particularly of emergency rooms, had a considerable impact.
In the context of COVID-19, hospitals were at full capacity in the first months of the outbreak; as the months went by, healthcare professionals learned more about possible treatments for the disease's symptoms and effects. This usually results in a decrease in the proportion of individuals that die from the disease (a decrease of μ(t)) and a decrease in the recovery time (an increase of γ(t)). As shown in Fig. 5 (b) and Fig. 5 (c), in qualitative terms, there was an increasing trend in γ(t) and a decreasing trend in μ(t).

The effective reproduction number is a crucial parameter in the SEIRD model that helps to predict the spread of infectious diseases. An Rt below 1 indicates that transmission of the infectious disease will gradually die out. By monitoring changes in Rt over time, public health officials can make informed decisions about interventions to control the spread of the disease. Fig. 6 (a) shows the evolution of Rt = ε·β(t)·α + β(t)/(γ(t)+μ(t)) in the proposed SEIRD compartmental model from February 20 to June 30, 2020. In the first several days of the outbreak, the effective reproduction number Rt was greater than 8, which resulted in a substantial outbreak. From February 25, Rt gradually decreased as localized lockdowns for certain regions took effect and awareness of the epidemic grew. However, Rt was still greater than 1, which may be due to the partially incomplete lockdown, or to the movement of people from northern to southern Italy when the country-wide lockdown was announced but not yet enforced. When the national lockdown was fully operational and strictly enforced, Rt kept decreasing and finally dropped below 1. Moreover, Rt steadily declined at the end of March due to a wider testing campaign that identified more mildly symptomatic infected individuals. Since June 15, Rt has shown a growing trend after the DPCM declared that general reopening was in effect, while social distancing and other measures remained.
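The effective reproduction number of Eq. (4) can be evaluated directly from the inferred parameters. A minimal sketch using the paper's estimated constants α ≈ 5.8 and ε ≈ 0.99, with illustrative (not fitted) rate values:

```python
def effective_reproduction_number(beta, gamma, mu, alpha=5.8, eps=0.99):
    """R_t = eps*beta(t)*alpha + beta(t)/(gamma(t)+mu(t)):
    transmission during the incubation period (length alpha,
    relative infectiousness eps) plus transmission during the
    infectious period (mean length 1/(gamma+mu))."""
    return eps * beta * alpha + beta / (gamma + mu)

# illustrative early-outbreak-like and post-lockdown-like rates
rt_early = effective_reproduction_number(beta=0.5, gamma=0.03, mu=0.02)
rt_late = effective_reproduction_number(beta=0.05, gamma=0.08, mu=0.005)
```

With high β the result is well above the epidemic threshold of 1, and with the reduced post-lockdown β it falls below 1, matching the qualitative behaviour described above.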
Additionally, to validate the estimated Rt, a serial Bayesian model was implemented to produce the Rt of Italy over the same time period [5], as shown in Fig. 6 (b).

Figure 5: The time-varying parameters of the SEIRD model estimated by the PINNs approach on Italy data from February 20 to June 30, 2020. (a): transmission rate β(t). (b): recovery rate γ(t). (c): death rate μ(t).

Parameters for the serial interval distribution in the Bayesian model were set according to the published literature (mean = 7.5 d; SD = 3.4 d) [18, 23]. As shown in Fig. 6, the Rt estimated by the proposed PINNs approach is essentially the same as that estimated by the Bayesian model. Moreover, the result of the proposed approach provides a more detailed and accurate capture of the dynamics.

3.3.3 Forecasting. Modelling results can provide reliable feedback for the authorities to make future decisions. The ODEs-based compartmental model requires determined initial conditions and model parameters to make predictions. To test the performance of the proposed PINNs approach, we performed predictions for the early outbreak of COVID-19 in Italy at one month, two months, and three months, respectively.
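Forecasting with the calibrated model amounts to integrating the SEIRD system forward with the parameters frozen at their last fitted values. A minimal explicit-Euler sketch; the initial state and parameter values below are illustrative, not the fitted Italian estimates:

```python
import numpy as np

def seird_step(state, beta, gamma, mu, alpha=5.8, eps=0.99, N=6e7, dt=1.0):
    """One explicit-Euler day of the SEIRD system with the rates
    held constant, as in a short-horizon forecast."""
    S, E, I, R, D = state
    infection = beta * S * (eps * E + I) / N
    dS = -infection
    dE = infection - E / alpha
    dI = E / alpha - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    return state + dt * np.array([dS, dE, dI, dR, dD])

# illustrative initial state (S, E, I, R, D)
state0 = np.array([5.9e7, 2e4, 5e4, 1e4, 3e3])
state = state0.copy()
for _ in range(7):  # 7-day forecast with frozen rates
    state = seird_step(state, beta=0.1, gamma=0.05, mu=0.01)
```

Note that the recovered and death compartments only ever increase, while the infected compartment can rise or fall, consistent with the terminal-state remark in the text.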
As the initial conditions can be obtained from the training data and the model parameters are already calibrated, we can forecast the epidemic dynamics by performing the forward process. In the prediction part, the values of β(t), γ(t), and μ(t) are assumed to be their final values in the training time window. Fig. 7 displays the one-week predictions and corresponding observations for three time periods, produced by using the SEIRD model with the estimated parameters.

Figure 6: Rt in Italy from February 24 to June 30, 2020. (a) Rt estimated by the proposed PINNs approach for the SEIRD model. (b) Rt estimated by the serial Bayesian model.

Note that the recovered and death states in the SEIRD model are terminal states, which means that the numbers of recovered and dead people are always non-decreasing. In turn, the number of infected people may see periods of increase and decrease, since it is a state of transition. Fig. 7 (a) displays the one-week prediction based on the reported data from February 20 to March 20, 2020; Fig. 7 (b) displays the one-week prediction based on the reported data from February 20 to April 19, 2020; and Fig. 7 (c) displays the one-week prediction based on the reported data from February 20 to May 19, 2020. The close match between the predictions and the observations demonstrates that the parameters inferred by the learned network are very plausible, and shows the generalization ability of the model.

Furthermore, to quantitatively test the prediction performance of the proposed approach, we use three evaluation metrics to make fair and effective comparisons: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). The calculation method is shown in Eqs.
(10)–(12):

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|, \qquad (10)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}, \qquad (11)$$

$$\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\hat{y}_i - y_i\right|}{\hat{y}_i} \times 100\%, \qquad (12)$$

Interventions to control COVID-19 keep being adjusted, which may introduce uncertainty; the experimental results presented in Table 1 show the highly accurate forecasting capability of the proposed approach.

Table 1: The forecasting performance at 3-day, 5-day and 7-day horizons.

Metrics  | After March 20, 2020     | After April 19, 2020    | After May 19, 2020
         | 3-day   5-day   7-day    | 3-day   5-day   7-day   | 3-day   5-day   7-day
MAE(I)   | 5411    5790    6419     | 2503    3258    2792    | 1352    2170    3046
RMSE(I)  | 5431    5819    6519     | 3705    2618    3275    | 1567    2515    3514
MAPE(I)  | 11.60%  11.52%  11.78%   | 2.32%   3.04%   2.61%   | 2.20%   3.70%   5.41%
MAE(R)   | 813     1728    2944     | 2934    5704    9001    | 1643    2700    4170
RMSE(R)  | 959     2128    3706     | 3321    6821    10936   | 1880    3151    4972
MAPE(R)  | 11.93%  20.07%  31.04%   | 5.57%   10.00%  14.83%  | 1.23%   1.96%   2.97%
MAE(D)   | 423     543     927      | 330     235     318     | 147     109     95
RMSE(D)  | 527     637     1151     | 349     279     379     | 147     122     109
MAPE(D)  | 8.36%   8.98%   12.64%   | 1.35%   0.95%   1.24%   | 0.45%   0.34%   0.30%
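The three error metrics of Eqs. (10)–(12) can be implemented directly. Note that, as written in the paper, MAPE divides by the prediction ŷ rather than the observation. The data below are illustrative:

```python
import numpy as np

def mae(y_hat, y):   # Eq. (10)
    return np.mean(np.abs(y_hat - y))

def rmse(y_hat, y):  # Eq. (11)
    return np.sqrt(np.mean((y_hat - y) ** 2))

def mape(y_hat, y):  # Eq. (12); the paper normalizes by the prediction y_hat
    return np.mean(np.abs(y_hat - y) / np.abs(y_hat)) * 100.0

# illustrative observed vs. predicted counts (not the reported data)
y_true = np.array([100.0, 200.0, 300.0])
y_pred = np.array([110.0, 190.0, 330.0])
err_mae = mae(y_pred, y_true)
err_rmse = rmse(y_pred, y_true)
err_mape = mape(y_pred, y_true)
```

RMSE is always at least as large as MAE, since it weights large errors more heavily.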
Figure 7: Forecasting results of the SEIRD model based on the estimated parameters. The first column plots the predicted current infections, the second column the predicted cumulative recovered, and the third column the predicted cumulative deaths; the dotted boxes represent the corresponding observations. (a) 7-day forecasting results based on the February 20 to March 20, 2020 time window. (b) 7-day forecasting results based on the February 20 to April 19, 2020 time window. (c) 7-day forecasting results based on the February 20 to May 19, 2020 time window.

4 DISCUSSION

Transmission modelling is increasingly being used to support public health decision-making in the control of infectious diseases. In this paper, a modified SEIRD compartmental model with time-varying parameters was introduced to describe and predict the dynamics of COVID-19 transmission in Italy. Estimating the unknown parameters of this model is a complex inverse problem, for whose solution we proposed a domain-specific PINNs approach. The proposed approach has been applied to modelling COVID-19 transmission in Italy; the estimated parameters proved effective in fitting the COVID-19 contagion data and in providing accurate predictions of its evolution. Beyond these results, the proposed PINNs approach allows a more detailed understanding of the contagion mechanism. Fig. 5 (a) shows that the control measures imposed by the authorities appear to have been effective in reducing the key transmission-rate parameter β(t). Fig. 5 (b) and (c) show that the recovery rate tends to increase with time and the death rate to decrease.
This phenomenon, which seems not directly related to the lockdown, can be attributed to different causes, among them a better understanding of the disease with a consequent improvement in the effectiveness of the national health system's response, and possibly a change in the nature, virulence, and lethality of the virus. Furthermore, we evaluated how well the estimated parameters fit the SEIRD compartmental model by comparing our results with previous publications. We compared our results to those obtained using a rolling regression framework [4]: the order of magnitude of the time-varying parameters β(t), γ(t), and μ(t) agrees, and the trend is almost identical. A comprehensive meta-analysis found that the median incubation period for general transmission in early outbreaks was 5.8 days (95% confidence interval (95% CI): 5.3, 6.2) [25]. Li et al. analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP; their results show that the mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0) [14]. Yang et al. collected contact-tracing data in a municipality in Hubei province during a full outbreak period to estimate the incubation period and serial interval of COVID-19; the estimated median incubation period of COVID-19 is 5.4 days (bootstrapped 95% confidence interval (CI) 4.8–6.0) [26]. The α estimated by the proposed PINNs approach is 5.8, which is consistent with the results of the above research. The ε estimated by the proposed PINNs approach is 0.99, which means that the transmission capacities of the exposed and symptomatic populations are nearly identical [9].
Numerous related studies demonstrate that the incubation period and the infectious period carry almost the same capacity for transmission [6, 22].

The goal of modelling the transmission dynamics of an infectious disease is to capture the mechanisms by which a host passes the infection on to other individuals. Once these mechanisms are clear, a model can be used as a kind of experimental system to simulate what would happen to the evolution of the disease under different interventions. While the proposed PINNs approach offers many advantages, it does have limitations. One of the main limitations is that the PINNs architecture requires prior knowledge of the physical laws and constraints that govern the problem being solved. The structure of compartmental models may change depending on the question of interest, impacting their accuracy. This means that if the underlying epidemiological laws are not well understood, or if the available data are not consistent with the known epidemiological laws, the model may not work well. It should be noted, however, that the emphasis of infectious disease models is on their application to public health, not on the mathematics of the models; as the world-renowned statistician George E. P. Box put it, "All models are wrong, but some are useful."

5 CONCLUSIONS

In this paper, we proposed a novel PINNs approach to estimate the unknown parameters (both time-varying and constant) of an ODEs-based compartmental model depicting the dynamics of COVID-19 transmission. The experimental results with real-world reported data reveal that the proposed COVID-19 modelling approach yields epidemiological models that can describe the real-time dynamics of the contagion, providing reliable predictions and valuable insight into the contagion mechanisms.
We have provided a complete workflow for analyzing infectious disease transmission systems described by an ODEs-based compartmental model. We emphasize that the proposed PINNs approach can be implemented without any background in numerical analysis (for example, stability conditions), requiring only familiarity with libraries for implementing neural networks. For a given scenario, the proposed PINNs approach can be effective for simulating different epidemic scenarios, testing various hypotheses, and designing suitable control measures.

6 ACKNOWLEDGMENTS

The study was supported by the National Natural Science Foundation of China (82041024 to Feng Chen and 81973142 to Yongyue Wei). This study was also partially supported by the Bill & Melinda Gates Foundation (INV-006371).
SooiU6_zYjK
Review
3: Marginally above acceptance threshold
### Summary

This work studies using physics-informed neural networks to estimate the unknown parameters of epidemic compartmental models. To achieve this, the work first proposes an extended compartmental model, named SEIRD, to model the dynamics of the COVID-19 pandemic, which takes the death (D) counts into consideration. The paper then posits that the parameters of the pandemic in different phases are dynamic. Therefore, it proposes a graph neural network to fit the reported cases and the epidemic model parameters simultaneously. In experiments, this work evaluates the proposed GNN on the reported cases from Italy. The results show that the model is able to fit the reported cases and generate corresponding epidemic model parameters. Lastly, the model is applied to forecast the infected cases. The results show that the model predicts the reported cases reasonably accurately, mostly achieving within 20% relative absolute error.

### Strengths

- This paper designs a physics-informed neural network to fit case counts and the parameters in the epidemic model simultaneously, and considers the dynamics of the epidemic model parameters.
- The empirical study on the reported cases from Italy shows that the model fits the reported cases accurately and generates meaningful model parameters.

### Weaknesses

- It would be interesting to connect the interpretation of the estimated epidemic parameters to the intervention policies. Figure 3 shows the intervention policies conducted by the government during the pandemic. I wonder whether the estimated parameters can be incorporated to explain the effect of each intervention policy. Can the local changes of the estimated parameters be interpreted as corresponding to the application of an intervention policy?
- The proposed method needs to be described in more detail. For example, in Figure 2, there is an automatic differentiation step to convert the estimated cases to their derivatives. It would be helpful to describe how this step is conducted, since it connects the two counterparts of the neural network.
- The experimental setup is not well described. For example, for the data fitting experiments in Section 3.3.1, it would be easier for the reader to interpret the results if the authors explained the data split used for fitting the model to the data. Would a different data split lead to different results?
- Comparison with related baselines needs to be incorporated. This paper shows the error of the model in forecasting the reported cases of the pandemic. However, a comparison with related baselines, such as other PINN or time-series prediction methods, would help to assess the effect of the proposed model.
- A discussion of related work is missing. It would be better to provide a more detailed discussion of previous epidemic models and physics-informed neural networks.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
u9zVZTg_Ky
KDD.org/2023/Workshop/epiDAMIK
2023
Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics
["Xiao Ning", "Yongyue Wei", "Feng Chen"]
Modelling and predicting the behaviour of infectious diseases is essential for early warning and evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This paper proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate time-varying parameters from given data for the compartmental model. Our proposed PINNs approach captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. The experimental findings on synthesized data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs approach can be successfully extended to other regions and infectious diseases.
["Compartmental models", "COVID-19 transmission", "Physics-informed neural networks", "Forward-inverse problem"]
ABSTRACT

Modelling and predicting the behaviour of infectious diseases is essential for early warning and for evaluating the most effective interventions to prevent significant harm. Compartmental models produce a system of ordinary differential equations (ODEs) that are renowned for simulating the transmission dynamics of infectious diseases. However, the parameters in compartmental models are often unknown, and they can even change over time in the real world, making them difficult to determine. This paper proposes an advanced artificial intelligence approach based on physics-informed neural networks (PINNs) to estimate the time-varying parameters of the compartmental model from given data. Our proposed PINNs approach captures the complex dynamics of COVID-19 by integrating a modified Susceptible-Exposed-Infectious-Recovered-Death (SEIRD) compartmental model with deep neural networks. The experimental findings on synthesized data have demonstrated that our method robustly and accurately learns the dynamics and forecasts future states. Moreover, as more data becomes available, our proposed PINNs approach can be successfully extended to other regions and infectious diseases.

CCS CONCEPTS

• Computer systems organization → Embedded systems; Redundancy; Robotics; • Networks → Network reliability.

KEYWORDS

Compartmental models, COVID-19 transmission, Physics-informed neural networks, Forward-inverse problem

ACM Reference Format: Xiao Ning, Yongyue Wei, and Feng Chen. 2023. Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics.

1 INTRODUCTION

The emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has presented an unprecedented and complex public health challenge, with emerging and re-emerging infectious diseases posing a significant threat. Compartmental models, governed by a nonlinear system of ordinary differential equations (ODEs), simulate multi-state population transitions by incorporating domain knowledge and mathematical assumptions to characterize the transmission dynamics of infectious diseases. These models are a powerful tool for detecting, understanding, and combating infectious disease outbreaks and have been widely used to evaluate the impact of various public health interventions during the COVID-19 pandemic [24]. However, since real-world data can be inherently stochastic, noisy, and even inaccessible, model optimization and methodological innovation are urgently needed to handle imperfect data and provide early warning of major public health emergencies.

Modelling and predicting the behaviour of infectious diseases is crucial for early warning and for evaluating effective interventions to mitigate damage. The first compartmental model, Susceptible-Infectious-Removed (SIR), was proposed by Kermack and McKendrick to study the epidemics of the Black Death in London and the plague in Mumbai [12].
Compartmental models allow the addition of compartments or transmission parameters to explore and estimate the impact of different assumptions regarding interventions. These parameters, included in the compartmental model, determine the transition progress between different disease statuses and can generate the essential characteristics of an epidemic [2]. Finding the best-fit parameters of the system, given the available data, is an inverse problem. Several numerical methods have been developed to infer constant model parameters from available data. These methods convert the inverse problem into an optimization problem and formulate an estimator by minimizing an objective function. However, since various non-pharmaceutical interventions (NPIs) were employed during the evolution of COVID-19, some model parameters are time-varying.

Identifying time-varying parameters in compartmental models is a complex inverse problem, making it challenging to accurately model outbreak dynamics [1, 10]. Recent advances in physics-informed machine learning have shown promise in COVID-19 transmission modelling by incorporating prior knowledge into deep neural networks to enhance their accuracy and robustness [11]. For example, Kharazmi et al. used PINNs to identify time-dependent parameters and data-driven fractional differential operators in several epidemiological models [13]. Long et al. proposed a variant of PINNs to fit daily reported cases and identify time-varying parameters in the susceptible-infectious-recovered-deceased model for the spread of COVID-19 [15]. Nascimento et al. introduced an approach that combines physics-informed and data-driven kernels to reduce the gap between predictions and observations [17]. Cai et al. employed fractional physics-informed neural networks to refine the classical susceptible-exposed-infected-removed (SEIR) model, infer time-dependent parameters, and identify unobserved dynamics of the fractional SEIR model [3]. However, most of these approaches only consider the transmission rate as a function of time, while setting the other parameters to fixed values. Additionally, they mainly use the time-varying parameters for prediction and lack a systematic epidemiological analysis.

The primary focus of this paper is to introduce a novel method for estimating the time-varying parameters of ODEs-based compartmental models and to assess the impact of NPIs based on the estimated parameters. We constructed a SEIRD compartmental model that takes the incubation period and the corresponding infectivity into account, including both unknown time-varying and constant parameters. Given the many unknown parameters and limited data, we model the system of ODEs with one network and the time-varying parameters with another network, reducing the number of neural network parameters. Furthermore, this structure of the PINNs approach is in line with prior epidemiological knowledge. We then tested the effectiveness of our methodology using real-world reported data; simulation experiments showed that our proposed PINNs method effectively performs data-driven parameter estimation for modelling COVID-19 transmission. Moreover, as more data becomes available, it can be successfully extended to model and analyze infectious disease transmission dynamics in various regions and for different infectious diseases.

2 METHODOLOGY

2.1 Compartmental model

Compartmental models enable the simulation of multi-state population transitions by incorporating domain knowledge and mathematical assumptions to characterize the dynamics of infectious diseases. These models are generally represented as the following
These models are generally represented as the followingnonlinear dynamical system: dU(t)dt=F(t,U(t);Ξ)U(t0)=U0(1)where U(t)∈RD(typicallyD≫1) is the state variable, t∈[t0,T]is the time range, U(t0)is the initial state, and Ξstands for theparameters of the dynamical system.The SIR compartmental model provided the simplest frameworkthat matched the reporting structure with the least underlyingassumptions. Many variations of the SIR model have been proposedto analyze the transmission of COVID-19. In this paper, we considera geographical region as isolated from other regions, and withinsuch region we divide the population ( N) of study region into fivecompartments, susceptible ( S, vulnerable to COVID-19 infection),exposed (E, latent individual or asymptomatic infective), infected(I, symptomatic infected), recovered ( R, immune to COVID-19), anddead (D, death due to COVID-19). The details of the SEIRD modelare described below: dS(t)dt=−βS(t)(εE(t))+I(t)NdE(t)dt=βS(t)(εE(t)+I(t))N−E(t)αdI(t)dt=E(t)α−γI(t)−μI(t)dR(t)dt=γI(t)dD(t)dt=μI(t)N=S(t)+E(t)+I(t)+R(t)+D(t)(2)WhereS(t),E(t),I(t),R(t),D(t)denote the number of suscepti-ble, exposed, infectious, recovered, and deceased individuals overtime respectively, along with non-negative initial conditions S(0)=S0,E(0)=E0,I(0)=I0,R(0)=R0,D(0)=D0.β≥0representsthe transmission rate, which represents the probability of infectionper exposure when a susceptible individual ( S) has contact withan infected patient ( I) and becomes a latent exposed individual(E). A coefficient parameter εis introduced since the transmissioncapacity of exposed and onset populations may be different. 
$\varepsilon\beta$ represents the potential rate per exposure when a susceptible individual ($S$) has contact with an exposed individual ($E$) and becomes exposed in turn. $\alpha$ is the average duration of the incubation period, so $1/\alpha$ is the rate at which latent individuals become infectious. Besides, $\gamma \geq 0$ represents the recovery rate, $\mu \geq 0$ represents the death rate, and $N$ is the total population.

The assumption that the parameters in Eqs. 2 are constant in time is highly restrictive and unrealistic for a real-world epidemic where various interventions exist. The interventions implemented by the authorities, and/or mutations of the virus, etc., mean that the compartmental model requires time-varying parameters to capture the dynamics of COVID-19. Therefore, treating the transmission rate $\beta$, recovery rate $\gamma$, and death rate $\mu$ as functions of time $\beta(t)$, $\gamma(t)$, $\mu(t)$, the re-written SEIRD model is as follows:

$$\begin{aligned} \frac{dS(t)}{dt} &= -\beta(t) S(t)\,\frac{\varepsilon E(t) + I(t)}{N} \\ \frac{dE(t)}{dt} &= \beta(t) S(t)\,\frac{\varepsilon E(t) + I(t)}{N} - \frac{E(t)}{\alpha} \\ \frac{dI(t)}{dt} &= \frac{E(t)}{\alpha} - \gamma(t) I(t) - \mu(t) I(t) \\ \frac{dR(t)}{dt} &= \gamma(t) I(t) \\ \frac{dD(t)}{dt} &= \mu(t) I(t) \\ N &= S(t) + E(t) + I(t) + R(t) + D(t) \end{aligned} \qquad (3)$$

The five variables $S(t)$, $E(t)$, $I(t)$, $R(t)$, $D(t)$ have the same meanings as in Eq. 2. If we assume that the total population $N$ is constant, then the changes in the five states sum to zero, namely $\frac{dS(t)}{dt} + \frac{dE(t)}{dt} + \frac{dI(t)}{dt} + \frac{dR(t)}{dt} + \frac{dD(t)}{dt} = 0$.

The basic reproduction number $R_0$ is a constant epidemiological parameter that provides an estimate of the contagiousness of the infectious disease. It also serves as a threshold parameter: when $R_0 > 1$, one infected individual can trigger an outbreak, while when $R_0 < 1$, the infection will not spread in the population. Given a compartmental model, $R_0$ can be calculated by the Next Generation Matrix (NGM) approach [7]. If the related parameters in the compartmental model are time-varying as in Eq.
3, the reproduction number $R_0$ is expected to keep changing as a function of time, called the effective reproduction number $R_t$. Deriving $R_t$ over the course of the SEIRD model using the NGM approach yields the following expression for the proposed SEIRD model:

$$R_t = \varepsilon\,\beta(t)\,\alpha + \frac{\beta(t)}{\gamma(t) + \mu(t)} \qquad (4)$$

$R_t$ provides an estimate of the contagiousness of the infectious disease during the course of an outbreak, when not every individual can be considered susceptible.

2.2 Deep neural networks

Deep neural networks (DNNs) have emerged as a reliable and effective method for nonlinear function approximation, demonstrating remarkable capabilities in scientific computation and engineering applications, as evidenced by their widespread utilization. Many types of DNNs have been developed, such as recurrent neural networks, convolutional neural networks, and Transformers [16]; here we only consider fully-connected deep neural networks (FDNNs). Neural networks can be viewed as discretizations of continuous dynamical systems, making them well suited for dealing with dynamic systems. Mathematically, an FDNN defines a mapping of the form

$$F: x \in \mathbb{R}^d \Longrightarrow y = F(x) \in \mathbb{R}^c, \qquad (5)$$

where $d$ and $c$ are the input and output dimensions, respectively. Generally, a standard neural unit of an FDNN receives an input $x \in \mathbb{R}^d$ and produces an output $y \in \mathbb{R}^m$, $y = \sigma(Wx + b)$, with $W \in \mathbb{R}^{m \times d}$ and $b \in \mathbb{R}^m$ being the weight matrix and bias vector, respectively. $\sigma(\cdot)$, referred to as the activation function, adds element-wise non-linearity to the model. An FDNN with $l$ hidden layers can be considered a nested composition of sequential standard neural units. For convenience, we denote the output of the DNN by $y(x; \theta)$, with $\theta$ standing for the set of all weights and biases.
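The FDNN mapping described above can be sketched in a few lines. The 1 → 16 → 16 → 5 architecture below is hypothetical (the paper does not specify layer sizes), mapping a scalar time t to the five SEIRD states; tanh is used as the hidden activation since PINNs require a smooth activation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fdnn_forward(x, weights, biases):
    """Forward pass of a fully-connected network: each hidden layer
    computes y = tanh(W y + b); the output layer is linear."""
    y = x
    for W, b in zip(weights[:-1], biases[:-1]):
        y = np.tanh(W @ y + b)           # smooth activation, non-zero 2nd derivative
    return weights[-1] @ y + biases[-1]  # linear output layer

# hypothetical 1 -> 16 -> 16 -> 5 network: input t, output 5 SEIRD states
sizes = [1, 16, 16, 5]
weights = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
out = fdnn_forward(np.array([0.5]), weights, biases)
```

In practice the weights would be trained by minimizing the PINNs loss rather than sampled at random as here.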
Specifically, the $j$th neuron in the $l$th layer can be formulated as

$$y_j^{[l]} = \sum_{k=1}^{n^{[l-1]}} w_{jk}^{[l]}\, \sigma^{[l-1]}\!\left(y_k^{[l-1]}\right) + b_j^{[l]}, \qquad (6)$$

where $y_k^{[l-1]}$ represents the value of the $k$th neuron in the $l-1$ layer, $n^{[l-1]}$ represents the number of neurons in the $l-1$ layer, $\sigma^{[l-1]}$ is the activation function of the $l-1$ layer, $w_{jk}^{[l]}$ is the weight between the $k$th neuron in the $l-1$ layer and the $j$th neuron in the $l$ layer, and $b_j^{[l]}$ is the bias of the $j$th neuron in the $l$ layer.

Figure 1: Illustration of the FDNN. A neural network consists of an input layer (the input x), several hidden layers (composed of weights $W_l$, bias $b_l$, and activation function $\sigma$), and an output layer.

The nonlinear activation function enhances the ability of a DNN to model various nonlinear problems, and selecting a suitable activation function matters greatly for DNNs applied in all domains. In particular, the activation function has an extremely significant
The main idea of PINNs is to integrate a priori knowledge, such as physical laws or domain expertise modelled by differential equations, into deep neural networks. The equations in the compartmental model are coupled, and the coefficients are not independent of each other through the lens of biology and epidemics. In this context, we employ two separate DNNs with input t to represent the states U(t) and the time-varying parameters, respectively. For the two unknown constant parameters (α, ε), we designed modified tanh activation functions to represent them. The tanh function is

tanh(x) = (e^x − e^−x) / (e^x + e^−x),

with values in [−1, 1]. Considering that α > 0 and 0 ≤ ε ≤ 1, we represent ε as tanh(x) and α as 21·tanh(x), where x is a random sample drawn from a uniform distribution on the interval [0, 3]. Meanwhile, COVID-19 transmission involves the analysis of real-world data, for which the available data tend to be small and sparse. Such a PINNs architecture enables a well-trained model with a limited data set.

The PINNs framework is required to fit the data and simultaneously satisfy the equations, so the loss function includes two parts. The first part is the mismatch between the network output and the available data; the other part is the residual of the ODEs. In this study, we employ the approximation U_NN(t; Θ_U) ≈ U(t) to represent the time-varying SEIRD equations (Eqs. 3). The parameters Θ are optimized to achieve the best fit with the observed data. Considering the available data U_j at times t_1, t_2, ..., t_N as training points (ground truth), the mean squared error (MSE) is calculated as follows:

MSE_u = (1/N) Σ_{j=1}^{N} | Û_NN(t_j) − U(t_j) |²,    (7)

The other component of the loss function is the residual of the system of Eqs. 1; we define the residual of the equations as R_NN(t) = dU(t)/dt − F(U_NN, t; Ξ).
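The modified-tanh construction for the constants described above (ε as tanh(x), α as 21·tanh(x), with x drawn uniformly from [0, 3]) can be sketched as follows. In the actual model each x would presumably be a trainable scalar; the excerpt only describes its initialization, so that is all that is shown here:

```python
import math
import random

def eps_param(x):
    # Squashes a raw scalar into [0, 1) for x >= 0, matching 0 <= epsilon <= 1.
    return math.tanh(x)

def alpha_param(x):
    # Squashes a raw scalar into [0, 21) for x >= 0, matching alpha > 0.
    return 21.0 * math.tanh(x)

random.seed(0)
x_eps = random.uniform(0.0, 3.0)    # initialization as described in the text
x_alpha = random.uniform(0.0, 3.0)
epsilon = eps_param(x_eps)
alpha = alpha_param(x_alpha)
```

Separate raw scalars x_eps and x_alpha are used here; whether the paper shares a single x between the two parameters is not stated in the excerpt.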
The residual, denoted as R(t; Θ_U), serves as a metric for assessing how accurately the approximation U_NN(t; Θ_U) satisfies the ordinary differential equations (ODEs). Evaluating the residual involves computing the time derivative of the neural network output, which can be accomplished using automatic differentiation [20]. Automatic differentiation is a computational technique that efficiently computes derivatives by applying the chain rule. It breaks functions down into elementary operations and calculates their derivatives, allowing accurate and efficient computation of the overall function's derivative with respect to its input variables.

MSE_r = (1/N) Σ_{j=1}^{N} | R_NN(t_j) |²,    (8)

In summary, the loss function of the proposed PINNs approach is defined as:

L = ω_u MSE_u + ω_r MSE_r    (9)

The weight coefficients ω_u and ω_r in the loss function play a crucial role in balancing the optimization process between learning from the data and satisfying the ODEs. These parameters allow fine-tuning of the model's behaviour and of the trade-off between the two objectives. By adjusting the values of ω_u and ω_r, emphasis can be placed on either accurately fitting the available data or ensuring the ODE constraints are well satisfied.

Consequently, the PINNs model strives to minimize the loss function, effectively learning the underlying physics encoded in the ODEs while accurately capturing the patterns and relationships in the available data.

3 EXPERIMENTS

In this section, we describe the collected data and present the results obtained from parameter estimation and predictions using the proposed PINNs approach.

3.1 Data source

For the COVID-19 epidemic in Italy, the first official report of an indigenous case was on February 21, 2020 in Lodi province, while several epidemiologically linked cases were traced back to February 20, 2020.
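The automatic differentiation described above (decompose a function into elementary operations, then chain-rule through them) can be illustrated with a minimal forward-mode sketch using dual numbers. Real PINNs implementations rely on a framework's autodiff (e.g. PyTorch's autograd) rather than this toy:

```python
import math

class Dual:
    """Dual number (value, derivative) for forward-mode automatic differentiation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule.
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def dtanh(x):
    # tanh with the chain rule: d/dt tanh(u) = (1 - tanh(u)^2) * u'.
    t = math.tanh(x.val)
    return Dual(t, (1.0 - t * t) * x.dot)

# d/dt of a tiny "network output" u(t) = tanh(2t) + 3t, evaluated at t = 0.5.
t = Dual(0.5, 1.0)          # seed derivative dt/dt = 1
u = dtanh(2 * t) + 3 * t    # u.dot now holds du/dt exactly, not a finite difference
```

This is the mechanism by which dU/dt in the residual R_NN(t) can be computed exactly at the training points.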
The data considered in our study is downloaded from the Italian Civil Protection (http://www.protezionecivile.gov.it/media-comunicazione/comunicati-stampa) and the Ministry of Health (http://www.salute.gov.it/portale/home.html). It comprises cumulative infected, recovered, and deceased cases for the period from February 20, 2020 (day 1), to June 30, 2020 (day 132) [8]. To avoid weekly fluctuations induced by the work-leisure shift and natural noise in the real-world data, a 7-day moving average was used to smooth the reported data, averaging the values of each day with those of the 7 days before.

[Figure 2: Schematic diagram of the PINNs framework for the SEIRD compartmental model with unknown (time-varying and constant) parameters. The green-shaded DNN represents the states U_NN(t) to fit the available data and infer the unobserved dynamics. The yellow-shaded DNN represents the time-varying parameters β(t), γ(t), μ(t). The two constant parameters (α, ε) are represented by the modified tanh activation function.]

In order to control the transmission of COVID-19 in Italy, lockdown and many restriction measures were implemented from February 23, 2020, following the timeline shown in Fig. 3.
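The smoothing step described above (each day averaged with the preceding days) can be sketched as a trailing moving average. Read literally, "each day with those of the 7 days before" is an 8-point trailing window, which is what this hypothetical helper implements; a plain 7-day trailing mean is a common alternative:

```python
def trailing_average(series, days_before=7):
    """Smooth a daily series by averaging each value with the `days_before`
    preceding values (shorter windows at the start of the series)."""
    smoothed = []
    for i in range(len(series)):
        window = series[max(0, i - days_before): i + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

raw = [10, 12, 8, 11, 30, 9, 10, 12, 11]   # toy daily counts, purely illustrative
smooth = trailing_average(raw)
```

The one-off spike (30) is damped in the smoothed series, which is the point of the preprocessing.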
All events and interventions are available from official websites (https://mn.gov/governor/covid-19/news/).

[Figure 3: Timeline of NPIs implemented in Italy to control COVID-19, from the first official reported case (February 21) and localized lockdowns for certain regions (February 23), through the DPCM decrees imposing the national lockdown, closing non-essential activities, and then gradually releasing restrictions, to the general reopening with social distancing and other measures remaining in effect. DPCM: Decree of the Prime Minister.]

3.2 Experimental settings

We train the PINNs model on a personal laptop running the Windows 10 operating system, equipped with an Intel(R) Core(TM) i7-8550U CPU operating at 1.8 GHz. We implement the PINNs approach using Python and the PyTorch framework [21]. For the numerical experiment, we train the neural networks using the Adam optimizer with an initial learning rate of 2×10⁻³ and a decay rate of 95% every 2000 epochs. The entire training process takes about 10 minutes to run 50,000 epochs on all training data, and predictions can be made within seconds.

3.3 Results

3.3.1 Data fitting. In this subsection, we evaluate how well the estimated parameters fit the SEIRD compartmental model on the available data.
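The optimizer schedule in Section 3.2 (initial learning rate 2×10⁻³, decayed by 5% every 2000 epochs) is a step schedule; in PyTorch this would plausibly be Adam combined with StepLR(step_size=2000, gamma=0.95). The rate at any epoch reduces to plain arithmetic:

```python
def learning_rate(epoch, base_lr=2e-3, decay=0.95, step=2000):
    # Step decay: multiply the base rate by `decay` once every `step` epochs.
    return base_lr * decay ** (epoch // step)

lr_start = learning_rate(0)        # rate at the first epoch
lr_one_step = learning_rate(2000)  # after one decay step
lr_end = learning_rate(49999)      # rate near the end of the 50,000-epoch run
```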
Fig. 4 shows the fitting of the dynamics of the SEIRD model to the available real-world reported data (after 7-day smoothing), which demonstrates that the proposed PINNs approach can accurately fit the different fluctuations in the data.

[Figure 4: Data fitting during training. (a) Fitting to the available data of currently infectious. (b) Fitting to the available data of cumulative recovered. (c) Fitting to the available data of cumulative deaths. Dots: observed data. Line: 7-day rolling average of observed data. Dashed: PINNs' prediction of the dynamics.]

3.3.2 Inference. We aim to infer the time-varying parameters β(t), γ(t), μ(t), as well as the constants α and ε, by solving the inverse problem of the SEIRD compartmental model. The incubation period and the infectiousness during this period are parameters specific to the virus, which can be obtained from clinical case information or inferred using statistical or mathematical modelling based on available data.
In our study, we estimate the incubation period of COVID-19 to be approximately 5.8 days, and the infectiousness during the incubation period is found to be nearly equal (99.9%) to that of the infection period.

The transmission dynamics of infectious diseases are influenced by multiple factors, such as government interventions, individual behaviour, and medical resources. In order to accurately model the spread of infectious diseases using compartmental models, it is necessary to update certain parameters over time to account for the evolving impact of interventions. These parameters include β(t), γ(t), and μ(t), which represent the time-varying rates of transmission, recovery, and mortality, respectively. In Figure 5, we present the inference results for these time-varying parameters in Italy from February 20 to June 30, 2020. This analysis provides insights into how the values of β(t), γ(t), and μ(t) change over the specified time period, reflecting the impact of interventions and other factors on the dynamics of the disease.

Note that the events that affect β(t) have to do with people's adaptation to preventive interventions and the interactions among individuals, whereas μ(t) relates to the availability and effectiveness of health care, as well as to the resource availability in hospitals. γ(t) is known to be a disease-specific parameter (the inverse of the infectious period) but is also affected by the capacity of the healthcare system to accommodate hospitalization. As shown in Fig. 5 (a), the transmission rate β(t) fits well with what would be expected given such events. The earliest traceable first confirmed case of COVID-19 was on February 20, 2020; the authorities of Italy started imposing localized lockdowns for certain regions on February 23, 2020, and these control measures achieved a certain success, as demonstrated by a significant reduction in the transmission rate β(t). As for γ(t) and μ(t), hospitals' capacity, particularly in emergency rooms, had a considerable impact.
In the context of COVID-19, hospitals were at full capacity in the first months of the outbreak, and as months went by, healthcare professionals learned more about possible treatments for the disease's symptoms and effects. This usually results in a decrease in the proportion of individuals who die from the disease (a decrease of μ(t)) and a decrease in the recovery time (an increase of γ(t)). As shown in Fig. 5 (b) and Fig. 5 (c), in qualitative terms, there was an increasing trend in γ(t) and a decreasing trend in μ(t).

The effective reproduction number is a crucial parameter in the SEIRD model that helps to predict the spread of infectious diseases. An Rt of less than 1 indicates that transmission of the infectious disease will gradually die out. By monitoring changes in Rt over time, public health officials can make informed decisions about interventions to control the spread of the disease. Fig. 6 (a) shows the evolution of Rt = ε·α·β(t) + β(t)/(γ(t) + μ(t)) in the proposed SEIRD compartmental model from February 20 to June 30, 2020. In the first several days of the outbreak, the effective reproduction number Rt was greater than 8, which resulted in a substantial outbreak. From February 25, Rt gradually decreased thanks to the localized lockdown of certain regions and growing awareness of the epidemic. However, Rt was still greater than 1, which may be due to the partially incomplete lockdown, or to the movement of people from northern to southern Italy when the country-wide lockdown was announced but not yet enforced. When the national lockdown was fully operational and strictly enforced, Rt kept decreasing and finally dropped below 1. Moreover, Rt steadily declined at the end of March due to a wider testing campaign that identified more mildly symptomatic infected individuals. Since June 15, Rt has shown a growing trend, after a DPCM declared that general reopening was in effect, with social distancing and other measures remaining.
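The Rt expression can be evaluated directly from the inferred parameter trajectories. A sketch, assuming (as the garbled typesetting of Eq. (4) suggests) that the exposed stage contributes ε·α·β(t) and the infectious stage β(t)/(γ(t)+μ(t)); the parameter values below are illustrative only, not the paper's estimates:

```python
def effective_reproduction_number(beta, gamma, mu, alpha=5.8, epsilon=0.99):
    # Assumed grouping: infections seeded while exposed (rate eps*beta over a
    # period of alpha days) plus infections seeded while infectious
    # (rate beta over a mean infectious period of 1/(gamma+mu) days).
    return epsilon * alpha * beta + beta / (gamma + mu)

# Early-outbreak-like values (illustrative only).
rt_early = effective_reproduction_number(beta=0.5, gamma=0.03, mu=0.03)
# Post-lockdown-like values (illustrative only).
rt_late = effective_reproduction_number(beta=0.05, gamma=0.09, mu=0.01)
```

With these illustrative inputs the early value sits well above 1 and the post-lockdown value below 1, matching the qualitative trajectory described in the text.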
Additionally, to validate the estimated Rt, a serial Bayesian model was implemented to produce the Rt of Italy over the same time period [5], as shown in Fig. 6 (b).

[Figure 5: The time-varying parameters of the SEIRD model estimated by the PINNs approach on Italy data from February 20 to June 30, 2020. (a): transmission rate β(t). (b): recovery rate γ(t). (c): death rate μ(t).]

Parameters for the serial interval distribution in the model were set according to the published literature (mean = 7.5 d; SD = 3.4 d) [18, 23]. As shown in Fig. 6, the Rt estimated by the proposed PINNs approach is essentially the same as that estimated by the Bayesian model. Moreover, the result of the proposed approach provides a more detailed and accurate capture of the dynamics.

3.3.3 Forecasting. Modeling results can provide reliable feedback for the authorities to make future decisions. The ODEs-based compartmental model requires determined initial conditions and model parameters to make predictions. To test the performance of the proposed PINNs approach, we performed predictions for the early outbreak of COVID-19 in Italy at one month, two months, and three months, respectively.
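Forecasting then amounts to integrating the calibrated ODEs forward from the last observed state, holding β, γ, μ at their final training-window values. A forward-Euler sketch of a generic SEIRD system with an infectious exposed class (the paper's exact Eqs. 1 and 3 are not in this excerpt, so the right-hand sides below are assumed, not quoted; all numbers are illustrative):

```python
def seird_step(state, beta, gamma, mu, alpha, eps, N, dt=1.0):
    # One Euler step of a generic SEIRD model where the exposed class is
    # infectious with relative infectiousness eps and mean duration alpha days.
    S, E, I, R, D = state
    force = beta * S * (I + eps * E) / N     # new infections per day
    dS = -force
    dE = force - E / alpha
    dI = E / alpha - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    return (S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR, D + dt * dD)

def forecast(state, days, **params):
    traj = [state]
    for _ in range(days):
        state = seird_step(state, **params)
        traj.append(state)
    return traj

# Toy 7-day forecast from an assumed end-of-training state.
N = 60_000_000
traj = forecast((N - 1000, 500, 500, 0, 0), days=7,
                beta=0.05, gamma=0.09, mu=0.01, alpha=5.8, eps=0.99, N=N)
```

The compartments sum to N at every step (the flows cancel), and R and D are non-decreasing, consistent with the "terminal states" remark in the text.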
As the initial conditions can be obtained from the training data and the model parameters are already calibrated, we can forecast the epidemic dynamics by performing the forward process. In the prediction part, the values of β(t), γ(t) and μ(t) are assumed to be their final values in the training time window. Fig. 7 displays the one-week predictions and corresponding observations for three time periods, produced using the SEIRD model with the estimated parameters.

[Figure 6: Rt in Italy from February 24 to June 30, 2020. (a) Rt estimated by the proposed PINNs approach for the SEIRD model. (b) Rt estimated by the serial Bayesian model.]

Note that the recovered and death compartments in the SEIRD model are terminal states, which means that the numbers of recovered and dead people are always non-decreasing. In turn, the number of infected people may see periods of increase and decrease, since it is a state of transition. Fig. 7 (a) displays the one-week prediction based on the reported data from February 20 to March 20, 2020; Fig. 7 (b) displays the one-week prediction based on the reported data from February 20 to April 19, 2020; and Fig. 7 (c) displays the one-week prediction based on the reported data from February 20 to May 19, 2020. The perfect match between the predictions and the observations demonstrates that the parameters inferred by the learned network are very plausible, as well as the generalization ability of the model.

Furthermore, to quantitatively test the prediction performance of the proposed approach, we use three evaluation metrics to make fair and effective comparisons: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). The calculation methods are shown in Eqs. (10)-(12).
MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|,    (10)

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)² ),    (11)

MAPE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i| / ŷ_i × 100%,    (12)

Interventions to control COVID-19 kept being adjusted, which may introduce uncertainty; nevertheless, the experimental results presented in Table 1 show the highly accurate forecasting capability of the proposed approach.

Table 1: The forecasting performance at 3-day, 5-day and 7-day horizons.

            After March 20, 2020      After April 19, 2020      After May 19, 2020
Metric      3-day   5-day   7-day     3-day   5-day   7-day     3-day   5-day   7-day
MAE(I)       5411    5790    6419      2503    3258    2792      1352    2170    3046
RMSE(I)      5431    5819    6519      3705    2618    3275      1567    2515    3514
MAPE(I)    11.60%  11.52%  11.78%     2.32%   3.04%   2.61%     2.20%   3.70%   5.41%
MAE(R)        813    1728    2944      2934    5704    9001      1643    2700    4170
RMSE(R)       959    2128    3706      3321    6821   10936      1880    3151    4972
MAPE(R)    11.93%  20.07%  31.04%     5.57%  10.00%  14.83%     1.23%   1.96%   2.97%
MAE(D)        423     543     927       330     235     318       147     109      95
RMSE(D)       527     637    1151       349     279     379       147     122     109
MAPE(D)     8.36%   8.98%  12.64%     1.35%   0.95%   1.24%     0.45%   0.34%   0.30%
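The three metrics in Eqs. (10)-(12) can be implemented directly. Note that the MAPE as written in the excerpt normalizes by the prediction ŷ rather than the observation y, which is followed here:

```python
import math

def mae(pred, obs):
    # Eq. (10): mean absolute error.
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    # Eq. (11): root mean square error.
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def mape(pred, obs):
    # Eq. (12): percentage error normalized by the prediction.
    return 100.0 * sum(abs(p - o) / p for p, o in zip(pred, obs)) / len(obs)

# Toy 3-day forecast vs. observations (illustrative values).
pred = [100.0, 200.0, 400.0]
obs = [110.0, 190.0, 420.0]
```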
[Figure 7: 7-day forecasting results of the SEIRD model based on the estimated parameters. The first column plots the predicted current infections, the second column the predicted cumulative recovered, and the third column the predicted cumulative deaths; the dotted boxes represent the corresponding observations. (a) 7-day forecasting results based on the February 20 to March 20, 2020 time window. (b) 7-day forecasting results based on the February 20 to April 19, 2020 time window. (c) 7-day forecasting results based on the February 20 to May 19, 2020 time window.]

4 DISCUSSION

Transmission modelling is increasingly being used to support public health decision-making in the control of infectious diseases. In this paper, a modified SEIRD compartmental model with time-varying parameters is introduced to describe and predict the dynamics of COVID-19 transmission in Italy. Estimating the unknown parameters of this model is a complex inverse problem, for the solution of which we proposed a domain-specific PINNs approach.

The proposed approach has been applied to modelling COVID-19 transmission in Italy; the estimated parameters proved effective in fitting the COVID-19 contagion data and in providing accurate predictions of the evolution. Beyond these results, the proposed PINNs approach allows us to have a more detailed understanding of the contagion mechanism. Fig. 5 (a) shows that the control measures imposed by the authorities seem to have been effective in reducing the key transmission rate parameter β(t). Fig. 5 (b) and (c) show that the recovery rate tends to increase with time and the death rate to decrease.
This phenomenon, which seems not directly related to the lockdown, can be attributed to different causes, among which are a better understanding of the disease and a consequent improvement in the effectiveness of the response from the national health system, and possibly a change in the nature, virulence, and lethality of the virus.

Furthermore, we evaluate how well the estimated parameters fit the SEIRD compartmental model by comparing our results with those of previous publications. We compare our results to those obtained using the rolling regression framework [4], where the order of magnitude of the time-varying parameters β(t), γ(t) and μ(t) is in agreement and the trend is almost identical. A comprehensive meta-analysis demonstrated that the median incubation period for general transmission in early outbreaks was 5.8 days [95% confidence interval (95% CI): 5.3, 6.2] [25]. Li et al. analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP; the results show that the mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0) [14]. Yang et al. collected contact tracing data in a municipality in Hubei province during a full outbreak period to estimate the incubation period and serial interval of COVID-19; the estimated median incubation period of COVID-19 was 5.4 days (bootstrapped 95% confidence interval (CI) 4.8-6.0) [26]. The α estimated by the proposed PINNs approach is 5.8, which is consistent with the results of the above research. The ε estimated by the proposed PINNs approach is 0.99, which means that the transmission capacities of the exposed and onset populations are nearly identical [9].
Numerous related studies demonstrate that the incubation period and the infection period carry almost the same capacity for transmission [6, 22].

The goal of modeling the transmission dynamics of an infectious disease is to capture the mechanisms of a host passing on the infection to other individuals. Once this information is clear, a model can be used as a sort of experimental system to simulate what would happen to the evolution of the disease under different interventions. While the proposed PINNs approach indeed offers many advantages, it does have some limitations. One of the main limitations is that the PINNs architecture requires prior knowledge of the physical laws and constraints that govern the problem being solved. The structure of compartmental models may change depending on the question of interest, which impacts their accuracy. That means that if the underlying epidemiological laws are not well understood, or if the available data are not consistent with the known epidemiological laws, the model may not work well. But it should be noted that the emphasis in infectious disease modelling is on the application to public health, not the mathematics of the models, as the world-renowned statistician George E. P. Box put it: "All models are wrong, but some are useful."

5 CONCLUSIONS

In this paper, we proposed a novel PINNs approach to estimate the unknown parameters (including time-varying and constant parameters) of an ODEs-based compartmental model depicting the dynamics of COVID-19 transmission. The experimental results with real-world reported data reveal that the proposed COVID-19 modeling approach yields epidemiological models that can describe the real-time dynamics of the contagion, providing reliable predictions and valuable insight into the contagion mechanisms.
We have provided a complete workflow for analyzing infectious disease transmission systems described by a system of ODEs produced by a compartmental model. We emphasize that the proposed PINNs approach can easily be implemented without any background knowledge of numerical analysis (for example, stability conditions), requiring only familiarity with libraries for implementing neural networks. For a given scenario, the proposed PINNs approach can be effective for simulating different epidemic scenarios, testing various hypotheses, and designing suitable control measures.

6 ACKNOWLEDGMENTS

The study was supported by the National Natural Science Foundation of China (82041024 to Feng Chen and 81973142 to Yongyue Wei). This study was also partially supported by the Bill & Melinda Gates Foundation (INV-006371).
ZTv2PLP1JmS
Interesting but not significant
1: Ok but not good enough - rejection
Review Summary: This paper presents an approach utilizing physics-informed neural networks (PINNs) to estimate time-varying parameters in compartmental models for infectious diseases. The authors successfully integrate the SEIRD model with deep neural networks to capture the dynamics of COVID-19, demonstrating proficient learning and accurate future state predictions using the PINNs approach. The results showcase the potential applicability of this method to various regions and infectious diseases. Nonetheless, the absence of comparative analysis with existing methods and the suboptimal forecasting performance depicted in Figure 7 and Table 1 raise notable concerns. Comparable studies (references [1] and [2] which are both compartmental model + deep neural networks for COVID-19 dynamics) have achieved superior performance with simpler compartmental models, specifically the SIRD model instead of the SEIRD model. Additional empirical evidence or theoretical support is imperative to substantiate the significance of this work. Pros: 1. Introduction of an advanced artificial intelligence approach based on physics-informed neural networks for estimating time-varying parameters in compartmental models. 2. Integration of the SEIRD model with deep neural networks to capture the complex dynamics of COVID-19. 3. The potential applicability of the proposed approach to other regions and infectious diseases. Cons: 1. Lack of comparison with existing methods and inadequate rationale behind the proposed model. 2. Suboptimal performance was observed in the forecasting results presented in Figure 7 and Table 1. 3. Similar studies (e.g., references [1] and [2]) have achieved superior performance. For instance, in [1], the mean absolute error (MAE) of parameter I for 3-day forecasting is reported as 29.57. In [2], the MAE of I for 3-day forecasting is documented as 251.73 and 200.24. 
Conversely, in this work, the MAE of I for 3-day forecasting is significantly larger, ranging from 1352 to 5411. Without substantial empirical evidence or theoretical support to establish the significance of this work, I am inclined to believe that its quality and significance may not meet the criteria for acceptance. [1] Ning, Xiao, et al. "Epi-DNNs: Epidemiological priors informed deep neural networks for modeling COVID-19 dynamics." Computers in Biology and Medicine 158 (2023): 106693. [2] Ning, Xiao, et al. "Euler iteration augmented physics-informed neural networks for time-varying parameter estimation of the epidemic compartmental model." Frontiers in Physics 10 (2022): 1300.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Unyf3QsNmx
KDD.org/2023/Workshop/epiDAMIK
2023
Hierarchical Clustering and Multivariate Forecasting for Health Econometrics
["Atika Rahman Paddo", "Sadia Afreen", "Saptarshi Purkayastha"]
Data science approaches in Health Econometrics and Public Health research are limited, with a lack of exploration of state-of-the-art computational methods. Recent studies have shown that neural networks and machine learning methods outperform traditional statistical methods in forecasting and time-series analysis. In this study, we demonstrate the use of unsupervised and supervised machine learning approaches to create "what-if" scenarios for forecasting the long-term impact of changes in socio-economic indicators on health indicators. These indicators include basic sanitation services, immunization, population ages, life expectancy, and domestic health expenditure. To begin, we utilized Hierarchical Cluster Analysis to group 131 countries into 9 clusters based on various indicators from the World Bank Health Statistics and Nutrition dataset. This step allowed us to create clusters of countries. In order to showcase the feasibility of our approach, we performed a time series analysis using multivariate prophet on the most significant features from a cluster consisting of Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia. The study developed robust models (𝑅2 = 0.93+) capable of forecasting 11 health indicators up to 10 years into the future. By employing these "what-if" scenarios and forecasting models, policymakers and healthcare practitioners can make informed decisions and effectively implement targeted interventions to address health-related challenges.
["Clustering", "forecasting", "health econometrics", "data science"]
ABSTRACT

Data science approaches in Health Econometrics and Public Health research are limited, with a lack of exploration of state-of-the-art computational methods. Recent studies have shown that neural networks and machine learning methods outperform traditional statistical methods in forecasting and time-series analysis. In this study, we demonstrate the use of unsupervised and supervised machine learning approaches to create "what-if" scenarios for forecasting the long-term impact of changes in socio-economic indicators on health indicators. These indicators include basic sanitation services, immunization, population ages, life expectancy, and domestic health expenditure. To begin, we utilized Hierarchical Cluster Analysis to group 131 countries into 9 clusters based on various indicators from the World Bank Health Statistics and Nutrition dataset. This step allowed us to create clusters of countries. In order to showcase the feasibility of our approach, we performed a time series analysis using multivariate prophet on the most significant features from a cluster consisting of Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia. The study developed robust models (R² = 0.93+) capable of forecasting 11 health indicators up to 10 years into the future. By employing these "what-if" scenarios and forecasting models, policymakers and healthcare practitioners can make informed decisions and effectively implement targeted interventions to address health-related challenges.

CCS CONCEPTS

• Computing methodologies → Modeling methodologies; • Applied computing → Health informatics; • Information systems → Clustering; Information systems applications.

KEYWORDS

Clustering, forecasting, health econometrics, data science

ACM Reference Format:
Atika Rahman Paddo, Sadia Afreen, and Saptarshi Purkayastha. 2023. Hierarchical Clustering and Multivariate Forecasting for Health Econometrics. In Proceedings of epiDAMIK @ SIGKDD Workshop. ACM, New York, NY, USA, 8 pages.
https://doi.org/XXXXXXX.XXXXXXX

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

epiDAMIK @ SIGKDD Workshop, 2023
© 2023 Association for Computing Machinery.
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM... $15.00
https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION

Health econometrics is a multidisciplinary field that combines economics and statistics to study various aspects of healthcare systems, policies, and outcomes. Traditionally, econometric methods have been employed to analyze healthcare data, including regression models, panel data analysis, and instrumental variable techniques [20, 7]. However, there is a growing recognition of the potential benefits of incorporating these advanced techniques into health econometrics research.

In today's interconnected society, understanding the factors that affect health outcomes is crucial for effective policymaking and healthcare treatments. With the availability of extensive health data, advanced analysis methods can provide valuable insights to support evidence-based decision-making. The World Bank's Health Statistics collection offers a wealth of data on various health indices across nations [26]. In this study, we aim to develop a better understanding of the predefined Gulf Cooperation Council (GCC) countries, which share similar economies and development goals [15]. By utilizing a clustering algorithm, we have identified similarities in their health statistics [34].
However, this study does not include one of the GCC countries, the United Arab Emirates (UAE). Katoue et al. argued that the health issues faced in the Middle East and North Africa regions must be highlighted, as these countries still face challenges in providing equitable and high-quality healthcare services. Limited literature supports evidence of improvements in these areas [13]. To address the health challenges in the GCC countries, including Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE, innovative strategies are necessary to improve the overall health status of the Middle Eastern countries [15, 19]. A United Nations report highlights disparities and commonalities in health factors among different regions in the Arab world [31]. While the report suggests that the GCC countries have made progress in maintaining sanitation and safe drinking water, it is unclear whether all countries in the region will continue with the same policies in the future [31].

This study aims to identify any disparities between countries regarding uniform healthcare provision. The 2015 World Bank report emphasizes the impact of health outcomes on health policies and expenditure in the GCC countries [28]. Changes in health outcomes, such as non-communicable diseases and life expectancy, coupled with inflation, may create disparities in health expenditure among these countries [2].

It remains uncertain which countries can improve overall healthcare and which may lag behind in developing uniform health policies [8]. Additionally, our research study focuses on population well-being, particularly in different age groups, and factors such as expenditure, immunization, and survival rates. Understanding the association between age and other health factors is crucial for targeting "age-specific" policies in healthcare management and disease prevention [9].
This research paper combines cluster analysis, feature importance analysis, and multivariate time series modeling to uncover the underlying factors influencing health outcomes within a selected cluster comprising five GCC countries: Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia. The findings contribute to a deeper understanding of the complex dynamics of health indicators and provide actionable insights for policymakers and healthcare professionals.

2 RELATED WORKS
Balçik et al. [5] conducted a study on clustering algorithms that is similar to ours. They focused on the hierarchical clustering of European Union countries based on preselected features to analyze healthcare development. Their clustering results were evaluated using statistical differences between indicator values. Similarly, Raheem et al. [29] approached their objective using the silhouette score, providing a clearer context for distinguishing clusters. While both approaches seemed reasonable, we opted to use the silhouette score in our study to understand the distinctiveness of our clusters, which yielded high accuracy in identifying cluster formation.

Several studies have been conducted on a national level using clustering approaches to determine differences in health indicators and gain insights into various countries. Proksch et al. [27] analyzed the clustering of 30 OECD countries to identify the varying aspects of health that differentiate these clusters. Muldoon et al. [23] and Lefèvre et al. [17] explored similarities among countries and their contributions to health factors. The former focused on mortality significance, while the latter employed a multivariate clustering approach to identify patterns in population and healthcare systems. In contrast to these studies, our research includes a forecasting approach, which provides predictive conclusions for policymakers, analysts, and health practitioners.

Levantesi et al.
[18] also utilized a multivariate forecasting approach to develop a predictive understanding of healthcare, albeit not aligned with the Prophet model. Khan & Noor [14] explored the application of the Prophet time series approach to visualize future health outcomes, but their study employed a univariate Prophet approach. In our study, we employed a multivariate Prophet approach, which offered a unique perspective by determining the relationship between changes in one indicator and another more accurately. Ahmed et al. [1] and Ampofo & Boateng [4] also adopted interesting approaches using multivariate Prophet, focusing specifically on the cardiovascular and diabetes health sectors, respectively.

Therefore, our research aims to establish a comprehensive association among predicted population well-being indicators, which can be utilized to advance our understanding of healthcare outcomes.

3 METHODOLOGY
The methodology utilized in this research paper followed a sequential process to analyze health data. First, the data underwent preprocessing. Next, a dendrogram was constructed using the Ward method to identify clusters, and a distance threshold was applied using the 'fcluster' function to determine the number of clusters. Afterward, the important features for each cluster were identified using a threshold of 0.615. We employed the multivariate Prophet method for time series forecasting and predicting future trends. Finally, statistical tests were conducted on the features to identify significant differences in the upcoming years.

3.1 Data Collection
We obtained the Health Statistics and Nutrition dataset from The World Bank, which offers comprehensive health indicators for various countries from 1960 to 2021.

3.2 Data Preprocessing
3.2.1 Data Cleaning. Initially, the original dataset contained information for 266 countries/regions and 255 indicators. To focus on a specific mid-range snapshot in time, we selected data from the year 2000.
We excluded regional aggregations from the dataset (EU, AFRO, etc.) and countries with significant missing values for most indicators (e.g., United Arab Emirates, Aruba, Afghanistan, Poland, Barbados, Guinea). Additionally, we removed indicators with extensive null values across countries. Any remaining null values for a country were imputed using the median of that column. After cleaning, the dataset comprised 134 countries and 128 variables.

3.2.2 Data Scaling using Min-Max Scaler. To ensure consistency and prevent any single feature from dominating the analysis, we scaled the data using the Min-Max Scaler [6]. This scaling technique transformed the data to a predefined range of 0 to 1 by subtracting the minimum value and dividing by the range. This process normalized the data within the [0, 1] range.

3.3 Clustering
3.3.1 Linkage Matrix. Next, we computed the linkage matrix using the linkage function from the scipy.cluster.hierarchy module. The linkage matrix represents the hierarchical clustering structure of the data based on pairwise distance calculations.

3.3.2 Creating a Dendrogram using Ward's Method. We employed Ward's method to construct a dendrogram, which visually displays the hierarchical relationships among the data points [24]. Ward's method minimizes the total within-cluster variance at each step of dendrogram creation. The resulting dendrogram exhibited hierarchical clustering patterns from a distance scale of 0 to 27, aiding in understanding the grouping patterns within the data (see Fig. 1).

3.3.3 Determining the Number of Clusters using fcluster. The number of clusters was determined by assigning data points to clusters based on a given threshold using the fcluster function. A threshold value of 5 was chosen to define the clusters within the dataset. The fcluster function, with the specified threshold, provided the cluster assignments for each data point.
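The scaling and clustering steps above can be sketched as follows. This is a minimal illustration only: the two-dimensional toy matrix stands in for the scaled country-by-indicator data, and the distance threshold is chosen for the toy data, not the paper's threshold of 5.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for the country-by-indicator matrix (6 "countries", 2 indicators).
X = np.array([
    [10.0, 200.0], [11.0, 210.0], [10.5, 205.0],   # one tight group
    [90.0, 950.0], [88.0, 940.0], [91.0, 960.0],   # another tight group
])

# Min-max scaling to [0, 1], as in Section 3.2.2.
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Ward linkage matrix, then flat clusters via a distance threshold (Section 3.3.3).
Z = linkage(X_scaled, method="ward")
labels = fcluster(Z, t=0.5, criterion="distance")

print(len(set(labels)))  # number of clusters found at this threshold
```

Cutting the same linkage matrix at different thresholds yields coarser or finer partitions without recomputing the clustering, which is what makes the dendrogram-plus-threshold workflow convenient.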
The above threshold resulted in 9 clusters.

Figure 1: Linkage matrix of nine clusters for the countries in a dendrogram

3.3.4 Evaluation Metrics for Each Cluster. To assess the quality of the clustering results and evaluate the fit of each data point to its assigned cluster, we calculated the Silhouette score for each cluster. The Silhouette score measures both the cohesion within each cluster and the separation between clusters [32, 25]. The score was calculated using equation 1:

Silhouette = (1/n) * Σ_i (b_i - a_i) / max(a_i, b_i)    (1)

where a_i is the average distance between sample i (for i = 1, 2, 3, ..., n) and all other points in its cluster; for each other cluster in the dataset, the average distance between the sample and all points in that cluster is noted, and the minimum of these distances is b_i; and n is the total number of samples. To calculate the per-cluster Silhouette score, a represents the average distance between the data point and the other data points within the same cluster, and b represents the average distance between the data point and the data points in the nearest neighboring cluster.

The Silhouette score ranges from -1 to 1, with a higher score indicating better clustering results. A score close to 1 signifies well-separated clusters, while a score close to -1 suggests overlapping or incorrectly assigned clusters. The average silhouette score of all data points within a cluster was calculated to obtain the silhouette score for that cluster. Based on the silhouette score and the more attainable count of countries in the cluster, cluster-8 was chosen for further analysis and time series forecasting.

3.3.5 Using hierarchical clustering over other clustering methods. We chose hierarchical clustering using Ward's method for our analysis of the health statistics and nutrition dataset. Hierarchical clustering allows us to explore the data in a hierarchical structure, capturing both global and local patterns of similarity.
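As a concrete check of equation (1), the silhouette score can be implemented directly in pure Python. The points below are invented for illustration and are not the paper's data.

```python
import math

def mean_silhouette(points, labels):
    """Average of (b_i - a_i) / max(a_i, b_i) over all samples, per equation (1)."""
    scores = []
    for i, (p, li) in enumerate(zip(points, labels)):
        # a_i: mean distance to the other members of p's own cluster.
        own = [math.dist(p, q) for j, (q, lj) in enumerate(zip(points, labels))
               if lj == li and j != i]
        a = sum(own) / len(own)
        # b_i: smallest mean distance to any other cluster.
        b = min(
            sum(math.dist(p, q) for q, lj in zip(points, labels) if lj == lc)
            / labels.count(lc)
            for lc in set(labels) if lc != li
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated toy clusters score close to 1.
pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(mean_silhouette(pts, [1, 1, 2, 2]))
```

Restricting the same computation to the members of a single cluster and averaging gives the per-cluster scores reported in Table 2.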
Hierarchical clustering is well-suited for datasets with arbitrary cluster shapes and sizes, making it suitable for analyzing health indicators across countries.

3.4 Feature Selection
Following the clustering of the countries, our focus shifted to pinpointing the most crucial characteristics. We accomplished this by implementing the sklearn library to perform feature selection. We evaluated 26 key features within the selected cluster, which ranked within the top percentile (Table 1).

3.4.1 Feature Importance Analysis for Each Cluster. Centroids, or representative data points for each cluster, were determined by averaging the scaled data. The significance of each feature was ascertained by arranging the feature values in descending order. A threshold of 0.815 yielded fewer features and did not provide a comprehensive outlook for health predictions. As a result, we opted for a threshold of 0.615, which allowed us to conduct a time series forecast with a broader feature set.

3.5 Statistical Tests
Our reference timeframe was set to the year 2000 for initiating the time series forecast, and we examined the data for each indicator within the clustered countries. The Kruskal-Wallis non-parametric test served as an effective method for determining value significance [36]. We utilized this test to discern statistically significant discrepancies among the indicators' values across different countries. After projecting the values for the next decade (2022-2031), we repeated the statistical test on these forecasted values to highlight significant differences between countries.

3.6 Time-Series Forecasting
3.6.1 Data Processing for Time-Series Analysis. Several factors were considered when preparing this data for modeling.

Selection of Time Frame: To forecast future health statistics for the clustered countries, we opted for the most recent data to train the multivariate Prophet model. Our dataset encompassed health data from 1960 to 2021, but for our purposes, we narrowed the timeframe to 2000 to 2021.
This eliminated the need for imputing data from distant years.

Reduction of Features: The initial feature importance analysis identified 26 features for the study. However, two features (Cause of death, by non-communicable diseases (% of total) and International migrant stock (% of population)) had a high percentage of missing values across all clustered countries, accounting for up to 81.82% of the total data. We therefore excluded these indicators and kept 24 features.

Imputation of Time-series Data: We identified missing values within our feature set, necessitating imputation for a complete time-series dataset. We used Naïve forecasting to fill in the missing data for the years from 2000 to 2021: if a specific year's data was missing for a particular country's indicator, we filled the gap using the preceding year's data for that same indicator. This resulted in a complete time-series dataset with 24 features for five countries.

Table 1: FEATURE IMPORTANCE FOR CLUSTER
(Indicator name; indicator code; feature importance value)
1. People using at least basic sanitation services (% of population)§; SH.STA.BASS.ZS; 0.9743
2. Immunization, measles (% of children ages 12-23 months); SH.IMM.MEAS; 0.9606
3. People using at least basic drinking water services (% of population); SH.H2O.BASW.ZS; 0.9585
4. Immunization, DPT (% of children ages 12-23 months)§; SH.IMM.IDPT; 0.9257
5. Survival to age 65, male (% of cohort); SP.DYN.TO65.MA.ZS; 0.8753
6. Survival to age 65, female (% of cohort)†; SP.DYN.TO65.FE.ZS; 0.8752
7. Population ages 25-29, male (% of male population); SP.POP.2529.MA.5Y; 0.8583
8. Population ages 20-24, female (% of female population); SP.POP.2024.FE.5Y; 0.8437
9. Life expectancy at birth, total (years)†; SP.DYN.LE00.IN; 0.8227
10. Population ages 25-29, female (% of female population)†; SP.POP.2529.FE.5Y; 0.8216
11. Life expectancy at birth, female (years)†; SP.DYN.LE00.FE.IN; 0.7954
12. Population ages 30-34, male (% of male population); SP.POP.3034.MA.5Y; 0.7651
13. Cause of death, by non-communicable diseases (% of total)∗; SH.DTH.NCOM.ZS; 0.7567
14. Population ages 20-24, male (% of male population)§; SP.POP.2024.MA.5Y; 0.7527
15. Population ages 30-34, female (% of female population); SP.POP.3034.FE.5Y; 0.7318
16. Population ages 15-64, male (% of male population); SP.POP.1564.MA.ZS; 0.722
17. Population ages 15-64 (% of total population)†; SP.POP.1564.TO.ZS; 0.7077
18. Domestic general government health expenditure (% of current health expenditure); SH.XPD.GHED.CH.ZS; 0.7007
19. Population ages 35-39, male (% of male population); SP.POP.3539.MA.5Y; 0.6914
20. Population growth (annual %)¶; SP.POP.GROW; 0.689
21. International migrant stock (% of population)∗; SM.POP.TOTL.ZS; 0.6842
22. Population ages 05-09, female (% of female population); SP.POP.0509.FE.5Y; 0.6734
23. Population ages 10-14, female (% of female population)†; SP.POP.1014.FE.5Y; 0.6686
24. Population ages 0-14, female (% of female population)†; SP.POP.0014.FE.ZS; 0.6615
25. Population, male (% of total population)†; SP.POP.TOTL.MA.ZS; 0.6595
26. Population ages 15-19, female (% of female population)†; SP.POP.1519.FE.5Y; 0.6388
∗ Removed because 81.82% of values were missing from 2000 to 2021.
† Removed because of high correlation with other important feature(s) ranked higher by feature importance.
§ Removed for poor predictions from the univariate Prophet model; not used in multivariate model training.
¶ Removed because of negative values in some years, which prevented log-transform scaling; therefore excluded from forecasting.

Logarithmic Scaling on Time-series Data: Prior to forecasting, we performed a logarithmic transformation for data scaling and reverted to the original values for performance measurement. Although the Min-Max scaling algorithm was used initially, we chose logarithmic scaling for the time series forecast.
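A minimal sketch of these two preprocessing steps, naïve forward-fill imputation and log scaling with its inverse transform, follows; the yearly series is invented for illustration and is not the paper's data.

```python
import math

# Hypothetical yearly series for one indicator (None marks a missing year).
series = [71.2, None, 72.0, 72.5, None, None, 73.1]

# Naive forecasting imputation: carry the preceding year's value forward.
filled = []
for value in series:
    if value is None:
        value = filled[-1]  # assumes the first year is observed
    filled.append(value)

# Logarithmic scaling before modeling, and the inverse transform afterwards.
scaled = [math.log(v) for v in filled]
restored = [math.exp(s) for s in scaled]

print(filled)  # [71.2, 71.2, 72.0, 72.5, 72.5, 72.5, 73.1]
```

Note that the log transform requires strictly positive values, which is exactly why the Population Growth indicator (which can be negative) had to be dropped from the forecasting step.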
This decision was based on the lower error rate found with logarithmic scaling when returning to the original data [20].

3.6.2 Prophet Forecasting Model to Predict Indicator Values. Our approach to predicting yearly indicator values for the clustered countries and important features involved multivariate modeling in Prophet. This is what enables "what-if" analysis for forecasting health indicators: if we simulate or forecast individual predictor indicators and guide policy accordingly, we can see the effects of those simulations on our final multivariate model. It is crucial to understand how these indicators' forecasts varied per country and whether the Prophet model's results were consistent for all clustered countries.

Univariate Prophet Model. The univariate Prophet model focuses on forecasting a single time series, taking into account the historical values of the target variable, and identifies patterns and trends to make future predictions. The model captures seasonality (s(t)), trend (g(t)), holiday effects (h(t)) (if any), and error (ε(t)) using additive regression components:

y(t) = g(t) + s(t) + h(t) + ε(t)    (2)

In our work, we used a univariate Prophet model to forecast the predictor values for the future. However, if existing econometric models of other types are better suited for a particular indicator, those can also be used. The univariate model for each predictor built the future dataframe for the years 2022 to 2031 (10 years).

Multivariate Prophet Model. The multivariate Prophet model extends the univariate model by incorporating additional exogenous variables or features as regressors that can influence the target variable. These additional exogenous variables (f1(t), f2(t), ..., fn(t)) can be other time series data or external factors such as economic indicators. In this work, we have incorporated other indicators in the health statistics data as regressors to predict specific indicators one by one.
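Prophet itself is not shown here; as a stand-in, the following numpy sketch illustrates the core idea behind adding an exogenous regressor: a fit that includes a synthetic predictor series f1(t) alongside the trend leaves a smaller residual than a trend-only fit. Ordinary least squares replaces Prophet's fitting machinery, and all series are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(2000, 2022, dtype=float)

# Synthetic exogenous indicator f1(t) and a target that partly depends on it.
f1 = 0.5 * np.sin((t - 2000.0) / 3.0) + 0.01 * (t - 2000.0)
y = 0.02 * (t - 2000.0) + 0.8 * f1 + rng.normal(0.0, 0.01, t.size)

def rms_residual(features, target):
    """Ordinary least squares fit; returns the RMS of the residuals."""
    X = np.column_stack(list(features) + [np.ones_like(target)])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - target) ** 2)))

trend_only = rms_residual([t], y)          # analogue of y(t) = g(t) + ε(t)
with_regressor = rms_residual([t, f1], y)  # analogue of adding f1(t)
print(with_regressor < trend_only)  # True: the regressor explains more variance
```

The same logic underlies the "what-if" scenarios: forecasting or simulating a regressor series and feeding it into the multivariate model propagates the assumed change into the target indicator's forecast.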
By including these variables, the model can capturetheir impact on the target variable and improve the accuracy ofpredictions.y(t)=g(t)+s(t)+h(t)+f1(t)+f2(t)+...+fn(t)+ε(t)(3)By incorporating relevant external factors, the multivariate modelcan capture additional information and dependencies that impactthe target variable. This can lead to more accurate and reliable pre-dictions. Including additional variables provides insights into theHierarchical Clustering and Multivariate Forecasting for Health Econometrics epiDAMIK @ SIGKDD Workshop, 2023factors driving the target variable’s behavior. It enables a better un-derstanding of the system’s relationships and dependencies amongdifferent variables. This also allows for customization based on thespecific requirements of the forecasting problem. But to incorpo-rate multivariate forecasting, we also found additional complexity,such as complex data preprocessing, feature selection, and potentialcorrelation considerations.The code to replicate this study can be found at:https://github.com/iupui-soic/WB-cluster-forecast.4 RESULTS4.1 ClusteringWith a distance threshold set at 5, our cluster dendrogram (Fig. 
1) presented nine (9) visually distinct clusters. The Silhouette score, used to evaluate the clusters and the countries within the nine clusters, is displayed in Table 2.

Table 2: CLUSTERED COUNTRIES AND EVALUATION METRIC
Cluster 1 (European Countries), Silhouette 0.2914: Bulgaria, Belarus, Czechia, Estonia, Croatia, Hungary, Lithuania, Latvia, Slovenia, Ukraine
Cluster 2 (European, North American, Oceanian Countries and Japan), Silhouette 0.4851: Australia, Austria, Belgium, Canada, Switzerland, Germany, Denmark, Spain, Finland, France, United Kingdom, Greece, Ireland, Iceland, Italy, Japan, Luxembourg, Netherlands, Norway, New Zealand, Portugal, Sweden, United States
Cluster 3 (East & West African, South Asian and Other Countries), Silhouette 0.4227: Benin, Bangladesh, Congo, Comoros, Eritrea, Ghana, Gambia, Haiti, Cambodia, Madagascar, Mauritania, Nepal, Pakistan, Senegal, Togo, Yemen
Cluster 4 (Southern African Countries), Silhouette 0.2484: Botswana, Lesotho, Namibia, Eswatini
Cluster 5 (African Countries), Silhouette 0.3309: Burundi, Burkina Faso, Cameroon, Ethiopia, Kenya, Liberia, Mali, Mozambique, Malawi, Niger, Nigeria, Rwanda, Sierra Leone, Chad, Tanzania, Uganda, Zambia
Cluster 6 (Ensemble of Countries from Different Regions), Silhouette 0.6693: Albania, Argentina, Armenia, Bahamas, Bosnia and Herzegovina, Brazil, Barbados, Chile, Colombia, Costa Rica, Cuba, Cyprus, Georgia, Israel, Jamaica, Kazakhstan, Sri Lanka, Moldova, Malta, Mauritius, Panama, Singapore, Seychelles, Thailand, Uruguay
Cluster 7 (Large Economy Countries in Asia), Silhouette 0.5667: China, India
Cluster 8 (Middle Eastern Countries), Silhouette 0.6597: Bahrain, Kuwait, Oman, Qatar, Saudi Arabia
Cluster 9 (Ensemble of Countries from Different Regions), Silhouette 0.3282: Azerbaijan, Belize, Bolivia, Algeria, Ecuador, Egypt, Fiji, Guatemala, Guyana, Indonesia, Iran, Jordan, Kyrgyz Republic, Kiribati, Lebanon, Morocco, Maldives, Mexico, Myanmar, Mongolia, Malaysia, Peru, Philippines, Paraguay, Solomon Islands, El Salvador, Turkmenistan, Tonga, Tunisia, Uzbekistan, Vietnam, Vanuatu

Figure 2: Time-series Yearly Data and Future Forecasts for Qatar using Univariate Prophet Model
Figure 3: Time-series
Yearly Data and Future Forecasts for Qatar using Multivariate Prophet Model

4.2 Feature Relevance
We analyzed correlations between the features. If an indicator demonstrated a strong positive or negative correlation with any other indicators in the dataset, we excluded it. We retained only those indicators that did not correlate highly with others. This process yielded 15 indicators out of the original 26 in Cluster-8, as shown in Table 1.

4.3 Time-Series Forecasting
Our secondary objective was to apply a multivariate time series forecasting Prophet model to the significant indicators of the five countries within a cluster [35]. A preliminary statistical test highlighted similarities in the indicators' values for the year 2000.

4.3.1 Outcome of Feature Reduction. Due to many missing values, we excluded two features identified through feature importance. We also removed nine indicators that exhibited a high correlation with other significant features and one indicator that displayed negative values, which was unsuitable for logarithmic transformation. Consequently, we proceeded with univariate forecasting for the remaining 14 indicators.

Table 3: ACCURACY METRICS FOR THE FORECASTED INDICATOR VALUES AMONG THE COUNTRIES
(Each cell is Avg ± SD across countries, reported as Prophet / LSTM.)
Population ages 30-34, male: RMSE 0.0001±0.0001 / 0.5941±0.3203; MAPE 0±0 / 0.0401±0.0259; R2 1±0 / 0.5997±0.3132; Adj. R2 1±0 / 0.5497±0.3523
Population ages 30-34, female: RMSE 0.0001±0.0001 / 0.2563±0.0961; MAPE 0±0 / 0.0216±0.0109; R2 1±0 / 0.6592±0.468; Adj. R2 1±0 / 0.6166±0.5265
Population ages 35-39, male: RMSE 0.0002±0.0002 / 0.3445±0.1631; MAPE 0±0 / 0.0259±0.0093; R2 1±0 / 0.6581±0.2856; Adj. R2 1±0 / 0.6154±0.3213
Population ages 25-29, male: RMSE 0.0059±0.0127 / 1.1566±0.7031; MAPE 0.0006±0.0013 / 0.071±0.0374; R2 1±0 / 0.6031±0.252; Adj. R2 1±0.0001 / 0.5535±0.2835
Population ages 20-24, female: RMSE 0.0287±0.0637 / 0.4546±0.3414; MAPE 0.0032±0.0072 / 0.0421±0.0381; R2 0.9979±0.0046 / 0.5067±0.3441; Adj. R2 0.9956±0.0097 / 0.445±0.3871
Population ages 15-64, male: RMSE 0.001±0.0012 / 1.2822±0.8086; MAPE 0±0 / 0.0143±0.0092; R2 1±0 / 0.4109±0.6021; Adj. R2 1±0 / 0.3372±0.6774
Population ages 05-09, female: RMSE 0.0001±0.0001 / 0.5177±0.1904; MAPE 0±0 / 0.0458±0.0212; R2 1±0 / 0.0855±1.1177; Adj. R2 1±0 / -0.0288±1.2574
Survival to age 65, male: RMSE 0.001±0.0007 / 1.1749±0.7324; MAPE 0±0 / 0.0125±0.0089; R2 1±0 / 0.5497±0.6035; Adj. R2 1±0 / 0.4935±0.6789
Domestic general government health expenditure: RMSE 0.4999±0.498 / 2.3871±1.0832; MAPE 0.0058±0.0059 / 0.0282±0.015; R2 0.9681±0.0409 / 0.4775±0.2928; Adj. R2 0.933±0.0859 / 0.4122±0.3294
Immunization, measles: RMSE 0.0009±0.0008 / 1.0328±0.6968; MAPE 0±0 / 0.0086±0.0054; R2 1±0 / 0.2123±0.2276; Adj. R2 1±0 / 0.1139±0.256
People using at least basic drinking water services: RMSE 0.0008±0.0006 / 0.2138±0.3582; MAPE 0±0 / 0.002±0.0035; R2 0.7997±0.4471 / 0.5849±0.3723; Adj. R2 0.5794±0.9388 / 0.533±0.4189

4.3.2 Statistical Testing on the Existing Indicator Values. We performed the Kruskal-Wallis test on the values of the 15 indicators for the countries within the clusters. The resulting p-values were all greater than 0.05, suggesting no statistically significant differences among the values of the indicators within the clustered countries. Since these indicators demonstrated similar values across countries, we continued with time series forecasting.

4.3.3 Univariate & Multivariate Prophet.
Future Dataframe. Univariate Prophet modeling produced reliable predictions for most indicators, yielding low RMSE and MAPE and better R2 values. However, three indicators demonstrated inferior R2 values compared to the others, leading us to exclude them from the multivariate models. These indicators were: Population ages 20-24, male (% of male population); Immunization, DPT (% of children ages 12-23 months); and People using at least basic sanitation services (% of the population).

Future Forecasts. The multivariate Prophet model generated forecasts for each of the 11 indicators under consideration.
In each forecast, the multivariate model included 10 additional regressors corresponding to the other 10 indicators, serving as predictors, excluding the target indicator. The accuracy metrics for the multivariate models are detailed in Table 3. The univariate forecasting model predicted 15 indicators for a sample country (Qatar), and the multivariate model predicted 11 indicators (see Fig. 2 and Fig. 3, respectively). These figures illustrate the multivariate Prophet model's superior forecasting performance. The combined forecasts for the clustered countries (Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia) from the year 2000 to 2031 for all 11 indicators are illustrated in Fig. 4 with continuous error bar plots, where the differences in the indicators in the future years can be seen.

4.3.4 Statistical Analysis on the Forecasting. The future forecasted indicator values also showed statistically significant differences (p < 0.05) among the countries, highlighting that the forecasted trajectories of the countries might diverge in the future based on the already changing nature of the predictors. Using univariate forecasting, such modeling would not have been possible.

5 DISCUSSION
Health econometrics analyses have traditionally relied on cross-country surveys like the National Family Health Survey (NFHS) and the Demographic Health Survey (DHS). They often employ logistic regression and other statistical techniques for comparing countries [20, 33]. Among unsupervised statistical approaches, I-distance [11, 12] has been utilized for ranking purposes, including ranking countries based on health indicators. However, our study presents the potential of enhanced clustering machine learning techniques for managing multiple related variables, particularly for large datasets [21].

Notably, certain clusters, such as Cluster-4 and Cluster-8, display geographical and cultural similarities.
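The Kruskal-Wallis screening used in Sections 3.5 and 4.3.4 can be sketched with scipy.stats.kruskal; the per-country samples below are invented solely to illustrate the two outcomes (similar values across countries versus significantly different ones) and are not the paper's data.

```python
from scipy.stats import kruskal

# Hypothetical indicator values for three countries.
similar = [[71.0, 71.5, 72.0], [71.2, 71.8, 72.1], [70.9, 71.6, 72.2]]
distinct = [[10.0, 11.0, 12.0], [50.0, 51.0, 52.0], [90.0, 91.0, 92.0]]

_, p_similar = kruskal(*similar)    # overlapping groups: large p-value
_, p_distinct = kruskal(*distinct)  # well-separated groups: small p-value

print(p_similar > 0.05, p_distinct < 0.05)
```

In the study's workflow, the first situation (p > 0.05 for the year 2000) justified treating the clustered countries jointly, while the second (p < 0.05 on the 2022-2031 forecasts) signals diverging future trajectories.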
The cluster linkage cutoff would need to be significantly lowered to establish more readily apparent similarities within each cluster. However, this could lead to fewer predictor indicators, affecting our features of importance. If we instead expand the indicators used in feature selection, we risk complicating the model and reducing its interpretability [16].

Other clustering algorithms, especially spectral clustering, while powerful in certain cases, may not always be the most appropriate choice. Spectral clustering operates on graph theory principles and requires constructing a similarity matrix and computing eigenvectors, which can be computationally expensive and memory-intensive for larger datasets. Spectral clustering also contains a stochastic factor, which was avoided by using hierarchical clustering.

Given the size and nature of our dataset, hierarchical clustering with Ward's method proved to be a more scalable and efficient option. It aligns well with our goals of exploring hierarchical patterns and capturing diverse cluster shapes in the health and nutrition dataset, and it provided meaningful insights into the health indicators across countries. In addition, logarithmic scaling of the dataset produced a lower overall mean squared error when predicting future feature values than Min-Max scaling.

While our models present robust and meaningful findings, they also highlight some challenges that need to be considered in future studies. A critical point is the trade-off between the granularity of clustering and the complexity of multivariate models. While deeper clustering might yield more nuanced insights, it can also reduce
While deeperclustering might yield more nuanced insights, it can also reduceHierarchical Clustering and Multivariate Forecasting for Health Econometrics epiDAMIK @ SIGKDD Workshop, 2023the number of predictor indicators and increase model complexity.It calls for a balanced approach to ensure the interpretability andpractical utility of the models.Additionally, our multivariate forecasting model is predicatedon current and past trends. The dynamic nature of health indi-cators and their susceptibility to various external factors such aspolitical changes, economic fluctuations, or global health crises,might alter these trends significantly. Future research must con-sider these potential disruptions and explore methods to accountfor such unpredictability.Further, we could determine certain associations by understand-ing the identification of statistical differences amongst featuresthat we obtained after analysis and predictions from a multivariatemodel. Viewing Fig.4i, where Qatar’s future prediction on healthexpenditure seems to decline, and Fig.4j also indicates a declinein immunization. Similar declines are seen in female populationages who are potentially at a maternal period (Fig.4c and 4e). Wedrew validating conclusions that our multivariate prophet modeldetermines the reliance of a feature on another feature for a country[22]. This can aid the several health assessment research associatedwith various indicators such as work by Amoatey et al. [3].Recognizing these trends and connections could guide policy-makers or health practitioners toward effective strategies for im-proving overall health outcomes. Moreover, our predictions con-sider various population age groups, offering a comprehensiveperspective on health prospects [9]. Our study’s application of mul-tivariate forecasting allowed us to predict future health outcomesbased on current trends and patterns. 
This model has allowed us to project possible trajectories for various health indicators in the Middle Eastern countries cluster, aiding in long-term strategic health planning for the region. The associations identified between different features underline the interconnectedness of health outcomes, signaling the necessity for an integrated approach to healthcare policy.

5.1 Limitations
This study has its limitations. Although we selected 26 indicators from the World Bank dataset's total of 128, not all could be incorporated into our multivariate prediction model. For example, the Population Growth indicator was excluded because it contained negative values incompatible with logarithmic transformation. However, our model's predictions could be significantly influenced by the inclusion of this indicator. Similarly, other omitted indicators could have offered additional insights into overall health outcomes.

Figure 4: Forecasts of each indicator for five clustered countries. Panels: (a) Population ages 35-39, male (% of male population); (b) Population ages 30-34, male (% of male population); (c) Population ages 30-34, female (% of female population); (d) Population ages 25-29, male (% of male population); (e) Population ages 20-24, female (% of female population); (f) Population ages 15-64, male (% of male population); (g) Population ages 05-09, female (% of female population); (h) Survival to age 65, male (% of cohort); (i) Domestic general government health expenditure (% of current health expenditure); (j) Immunization, measles (% of children ages 12-23 months); (k) People using at least basic drinking water services (% of population). Blue forecast lines are for Bahrain, orange for Kuwait, green for Oman, red for Qatar, and purple for Saudi Arabia.

5.2 Future Work
Future work could involve constructing a more informative model with an expanded set of features or a larger cluster of countries. Techniques like Neural Prophet [37], DeepAR [30], or even simpler models like a Random Forest Regressor [10] could be explored. Alternative approaches to constructing future dataframes, such as Auto ARIMA, could yield more reliable results.

6 CONCLUSION
In conclusion, our study has identified key factors influencing health outcomes in selected Gulf Cooperation Council (GCC) countries (Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia). We highlighted the importance of population wellness and age-specific strategies in healthcare management and disease prevention. Our method involved data preprocessing, clustering using Ward's method, feature selection, and time series forecasting with multivariate Prophet. This research provides a comprehensive approach to health data analysis, identifying crucial health outcome influencers and delivering actionable insights for policymakers and healthcare professionals using machine learning and forecasting techniques.
BrMpa2ZuaxJ
Hierarchical Clustering and Multivariate Forecasting for Health Econometrics - Review
4: Good paper, accept
**Summary:** This study uses clustering and time series forecasting to create retrospective scenarios for forecasting the long-term impact of changes in socio-economic indicators on health indicators. First, the authors used hierarchical clustering to group countries based on socio-economic indicators from the World Bank Health Statistics and Nutrition dataset. The authors then performed time series analysis to predict the values of the different indicators using the multivariate Prophet model on the countries appearing in one of the groups. This led to valuable insights about future dynamics.

**Strong Points:**
- The clusters constructed are interesting. Cluster 1 seems to be a whole group of Eastern European countries which are geographical neighbors. Cluster 3 countries are not geographically close but are developing nations. Clusters 4 & 5 seem to be African countries, and so on...
- The authors perform a thorough literature review which provides a good platform to evaluate the significance of this study.
- This is a well-written paper. The explanations provided are good, the figures are well made, and it applies a variety of methods. This surely adds to the technical contributions of this work.

**Weak Points:**
- Why hierarchical clustering? Spectral clustering is also a good method, right? The authors need to mention their motivation behind using hierarchical clustering.
- Retrospective interpretations of the clusters are needed, as the relations in some of them are not that obvious. For example, what is the relation between the countries that appear in cluster 2? It's not that clear.
- In Figure 4, some of the indicator forecasts have a much higher level of uncertainty than the others. Exploring what is causing this is extremely valuable but is sadly missing.

**Suggestions:**
- I understand that logarithmic scaling gave better performance.
However, one of the limitations mentioned, the inability to forecast the Population Growth indicator, could easily be addressed by using Min-Max scaling. In any case, the performance could have been reported.
- What is the significance of a threshold of 0.815 in section 3.4? Was it used in prior works?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Unyf3QsNmx
KDD.org/2023/Workshop/epiDAMIK
2023
Hierarchical Clustering and Multivariate Forecasting for Health Econometrics
["Atika Rahman Paddo", "Sadia Afreen", "Saptarshi Purkayastha"]
Data science approaches in Health Econometrics and Public Health research are limited, with a lack of exploration of state-of-the-art computational methods. Recent studies have shown that neural networks and machine learning methods outperform traditional statistical methods in forecasting and time-series analysis. In this study, we demonstrate the use of unsupervised and supervised machine learning approaches to create "what-if" scenarios for forecasting the long-term impact of changes in socio-economic indicators on health indicators. These indicators include basic sanitation services, immunization, population ages, life expectancy, and domestic health expenditure. To begin, we utilized Hierarchical Cluster Analysis to group 131 countries into 9 clusters based on various indicators from the World Bank Health Statistics and Nutrition dataset. This step allowed us to create clusters of countries. In order to showcase the feasibility of our approach, we performed a time series analysis using multivariate prophet on the most significant features from a cluster consisting of Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia. The study developed robust models (𝑅2 = 0.93+) capable of forecasting 11 health indicators up to 10 years into the future. By employing these "what-if" scenarios and forecasting models, policymakers and healthcare practitioners can make informed decisions and effectively implement targeted interventions to address health-related challenges.
["Clustering", "forecasting", "health econometrics", "data science"]
ABSTRACT
Data science approaches in Health Econometrics and Public Health research are limited, with a lack of exploration of state-of-the-art computational methods. Recent studies have shown that neural networks and machine learning methods outperform traditional statistical methods in forecasting and time-series analysis. In this study, we demonstrate the use of unsupervised and supervised machine learning approaches to create "what-if" scenarios for forecasting the long-term impact of changes in socio-economic indicators on health indicators. These indicators include basic sanitation services, immunization, population ages, life expectancy, and domestic health expenditure. To begin, we utilized Hierarchical Cluster Analysis to group 131 countries into 9 clusters based on various indicators from the World Bank Health Statistics and Nutrition dataset. This step allowed us to create clusters of countries. In order to showcase the feasibility of our approach, we performed a time series analysis using multivariate Prophet on the most significant features from a cluster consisting of Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia. The study developed robust models (R2 = 0.93+) capable of forecasting 11 health indicators up to 10 years into the future. By employing these "what-if" scenarios and forecasting models, policymakers and healthcare practitioners can make informed decisions and effectively implement targeted interventions to address health-related challenges.

CCS CONCEPTS
• Computing methodologies → Modeling methodologies; • Applied computing → Health informatics; • Information systems → Clustering; Information systems applications.

KEYWORDS
Clustering, forecasting, health econometrics, data science

ACM Reference Format:
Atika Rahman Paddo, Sadia Afreen, and Saptarshi Purkayastha. 2023. Hierarchical Clustering and Multivariate Forecasting for Health Econometrics. In Proceedings of epiDAMIK @ SIGKDD Workshop. ACM, New York, NY, USA, 8 pages.
https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION
Health econometrics is a multidisciplinary field that combines economics and statistics to study various aspects of healthcare systems, policies, and outcomes. Traditionally, econometric methods have been employed to analyze healthcare data, including regression models, panel data analysis, and instrumental variable techniques [20, 7]. However, there is a growing recognition of the potential benefits of incorporating these advanced techniques into health econometrics research.

In today's interconnected society, understanding the factors that affect health outcomes is crucial for effective policymaking and healthcare treatments. With the availability of extensive health data, advanced analysis methods can provide valuable insights to support evidence-based decision-making. The World Bank's Health Statistics collection offers a wealth of data on various health indices across nations [26]. In this study, we aim to develop a better understanding of the predefined Gulf Cooperation Council (GCC) countries, which share similar economies and development goals [15]. By utilizing a clustering algorithm, we have identified similarities in their health statistics [34].
However, this study does not include one of the GCC countries, the United Arab Emirates (UAE). Katoue et al. argued that the health issues faced in the Middle East and North Africa regions must be highlighted, as these countries still face challenges in providing equitable and high-quality healthcare services. Limited literature supports evidence of improvements in these areas [13]. To address the health challenges in the GCC countries, including Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE, innovative strategies are necessary to improve the overall health status of the Middle Eastern countries [15, 19]. A United Nations report highlights disparities and commonalities in health factors among different regions in the Arab world [31]. While the report suggests that the GCC countries have made progress in maintaining sanitation and safe drinking water, it is unclear whether all countries in the region will continue with the same policies in the future [31].

This study aims to identify any disparities between countries regarding uniform healthcare provision. The 2015 World Bank report emphasizes the impact of health outcomes on health policies and expenditure in the GCC countries [28]. Changes in health outcomes, such as non-communicable diseases and life expectancy, coupled with inflation, may create disparities in health expenditure among these countries [2].

It remains uncertain which countries can improve overall healthcare and which may lag behind in developing uniform health policies [8]. Additionally, our research study focuses on population well-being, particularly in different age groups, and factors such as expenditure, immunization, and survival rates. Understanding the association between age and other health factors is crucial for targeting "age-specific" policies in healthcare management and disease prevention [9].
This is significant in terms of healthcare management and disease prevention.

This research paper combines cluster analysis, feature importance analysis, and multivariate time series modeling to uncover the underlying factors influencing health outcomes within a selected cluster comprising five GCC countries: Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia. The findings contribute to a deeper understanding of the complex dynamics of health indicators and provide actionable insights for policymakers and healthcare professionals.

2 RELATED WORKS
Balçik et al. [5] conducted a study on clustering algorithms that is similar to ours. They focused on the hierarchical clustering of European Union countries based on preselected features to analyze healthcare development. Their clustering results were evaluated using statistical differences between indicator values. Similarly, Raheem et al. [29] approached their objective using the silhouette score, providing a clearer context for distinguishing clusters. While both approaches seemed reasonable, we opted to use the silhouette score in our study to understand the distinctiveness of our clusters, which yielded high accuracy in identifying cluster formation.

Several studies have been conducted on a national level using clustering approaches to determine differences in health indicators and gain insights into various countries. Proksch et al. [27] analyzed the clustering of 30 OECD countries to identify the varying aspects of health that differentiate these clusters. Muldoon et al. [23] and Lefèvre et al. [17] explored similarities among countries and their contributions to health factors. The former focused on mortality significance, while the latter employed a multivariate clustering approach to identify patterns in population and healthcare systems. In contrast to these studies, our research includes a forecasting approach, which provides predictive conclusions for policymakers, analysts, and health practitioners.

Levantesi et al.
[18] also utilized a multivariate forecasting approach to develop a predictive understanding of healthcare, albeit not aligned with the Prophet model. Khan & Noor [14] explored the application of the Prophet time series approach to visualize future health outcomes, but their study employed a univariate Prophet approach. In our study, we employed a multivariate Prophet approach, which offered a unique perspective by determining the relationship between changes in one indicator and another more accurately. Ahmed et al. [1] and Ampofo & Boateng [4] also adopted interesting approaches using multivariate Prophet, focusing specifically on the cardiovascular and diabetes health sectors, respectively.

Therefore, our research aims to establish a comprehensive association among predicted population well-being indicators, which can be utilized to advance our understanding of healthcare outcomes.

3 METHODOLOGY
The methodology utilized in this research paper followed a sequential process to analyze health data. First, the data underwent preprocessing. Next, a dendrogram was constructed using the Ward method to identify clusters. A threshold was applied using the 'fcluster' function to determine the number of clusters. Afterward, the important features for each cluster were identified using a threshold of 0.615. We employed the multivariate Prophet method for time series forecasting and predicting future trends. Finally, statistical tests were conducted on the features to identify significant differences in the upcoming years.

3.1 Data Collection
We obtained the Health Statistics and Nutrition dataset from The World Bank, which offers comprehensive health indicators for various countries from 1960 to 2021.

3.2 Data Preprocessing
3.2.1 Data Cleaning. Initially, the original dataset contained information for 266 countries/regions and 255 indicators. To focus on a specific midway time shot, we selected data from 2000.
We excluded regional aggregations from the dataset (EU, AFRO, etc.) and countries with significant missing values for most indicators (e.g., United Arab Emirates, Aruba, Afghanistan, Poland, Barbados, Guinea). Additionally, we removed indicators with extensive null values across countries. Any remaining null values for a country were imputed using the median of that column. After cleaning, the dataset comprised 134 countries and 128 variables.

3.2.2 Data Scaling using Min-Max Scaler. To ensure consistency and prevent any single feature from dominating the analysis, we scaled the data using the Min-Max Scaler [6]. This scaling technique transformed the data to a predefined range of 0 to 1 by subtracting the minimum value and dividing by the range. This process normalized the data within the [0, 1] range.

3.3 Clustering
3.3.1 Linkage Matrix. Next, we computed the linkage matrix using the linkage function from the scipy.cluster.hierarchy module. The linkage matrix represents the hierarchical clustering structure of the data based on pairwise distance calculations.

3.3.2 Creating a Dendrogram using Ward's Method. We employed Ward's method to construct a dendrogram, which visually displays the hierarchical relationships among the data points [24]. Ward's method minimizes the total within-cluster variance at each step of dendrogram creation. The resulting dendrogram exhibited hierarchical clustering patterns from a distance scale of 0 to 27, aiding in understanding the grouping patterns within the data (see Fig. 1).

3.3.3 Determining the Number of Clusters using fcluster. The number of clusters was determined by assigning data points to clusters based on a given threshold using the fcluster function. A threshold value of 5 was chosen to define the clusters within the dataset. The fcluster function, with the specified threshold, provided the cluster assignments for each data point.
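A minimal sketch of the scaling and dendrogram-cutting steps just described, using synthetic stand-in data (the real input is the cleaned 134-country by 128-indicator World Bank matrix); the distance threshold of 5 matches the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic stand-in: 20 "countries" x 8 "indicators" in two loose groups.
data = np.vstack([rng.normal(0.2, 0.05, size=(10, 8)),
                  rng.normal(0.8, 0.05, size=(10, 8))])

# Min-Max scale each indicator to [0, 1] (Section 3.2.2).
scaled = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))

# Ward linkage minimizes total within-cluster variance at each merge.
Z = linkage(scaled, method="ward")

# Cut the dendrogram at a distance threshold; the paper uses t=5.
cluster_labels = fcluster(Z, t=5, criterion="distance")
n_clusters = len(np.unique(cluster_labels))
```

On the real data this cut produced the paper's nine clusters; here the cluster count depends on the synthetic data's separation.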
The above threshold resulted in 9 clusters.

3.3.4 Evaluation Metrics for Each Cluster. To assess the quality of the clustering results and evaluate the fit of each data point to its assigned cluster, we calculated the Silhouette score for each cluster. The Silhouette score measures both the cohesion within each cluster and the separation between clusters [32, 25]. The score was calculated using equation 1:

Silhouette = (1/n) Σ_i (b_i − a_i) / max(a_i, b_i)   (1)

where a_i is the average distance between sample i (for i = 1, 2, 3, ..., n) and all other points in its cluster; for each other cluster in the dataset, the average distance between the sample and all points in that cluster is noted, and the minimum of these distances is b_i; and n is the total number of samples. To calculate the per-cluster Silhouette score, a represents the average distance between the data point and the other data points within the same cluster, and b represents the average distance between the data point and the data points in the nearest neighboring cluster.

Figure 1: Linkage matrix of nine clusters for the countries in a dendrogram

The Silhouette score ranges from -1 to 1, with a higher score indicating better clustering results. A score close to 1 signifies well-separated clusters, while a score close to -1 suggests overlapping or incorrectly assigned clusters. The average silhouette score of all data points within a cluster was calculated to obtain the silhouette score for that cluster. Based on the silhouette scores and a more manageable cluster size, cluster-8 was chosen for the further time series forecasting analysis.

3.3.5 Using Hierarchical Clustering over Other Clustering Methods. We chose hierarchical clustering using Ward's method for our analysis of the health statistics and nutrition dataset. Hierarchical clustering allows us to explore the data in a hierarchical structure, capturing both global and local patterns of similarity.
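Equation (1) can be implemented directly with numpy; the toy points below form two well-separated groups, so the per-cluster scores come out close to 1:

```python
import numpy as np

def silhouette_scores(X, labels):
    # Per-sample silhouette values following equation (1):
    # a_i = mean distance to the other points in the same cluster,
    # b_i = smallest mean distance to the points of any other cluster.
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = np.empty(len(X))
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False  # exclude the point itself from a_i
        a = dist[i, same].mean() if same.any() else 0.0
        b = min(dist[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        scores[i] = (b - a) / max(a, b)
    return scores

# Two well-separated toy clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])

# Per-cluster score = mean silhouette of that cluster's points.
per_cluster = {int(c): silhouette_scores(X, labels)[labels == c].mean()
               for c in np.unique(labels)}
```

sklearn's `silhouette_samples`/`silhouette_score` compute the same quantity for real workloads; the explicit loop here just mirrors the paper's equation.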
It is well-suited for datasets with arbitrary cluster shapes and sizes, making it suitable for analyzing health indicators across countries.

3.4 Feature Selection
Following the clustering of the countries, our focus shifted to pinpointing the most crucial characteristics. We accomplished this by using the sklearn library to perform feature selection. We evaluated 26 key features within the selected cluster, which ranked within the top percentile (Table 1).

3.4.1 Feature Importance Analysis for Each Cluster. Centroids, or representative data points for each cluster, were determined by averaging the scaled data. The significance of each feature was ascertained by arranging the feature values in descending order. A threshold of 0.815 yielded fewer features and did not provide a comprehensive outlook for health predictions. As a result, we opted for a threshold of 0.615, which allowed us to conduct a time series forecast with a broader feature set.

3.5 Statistical Tests
Our reference timeframe was set to the year 2000 for initiating the time series forecast, and we examined the data for each indicator within the clustered countries. The Kruskal-Wallis non-parametric test served as an effective method for determining value significance [36]. We utilized this test to discern statistically significant discrepancies among the indicators' values across different countries. After projecting the values for the next decade (2022-2031), we repeated the statistical test on these forecasted values to highlight significant differences between countries.

3.6 Time-Series Forecasting
3.6.1 Data Processing for Time-Series Analysis. Several factors were considered when preparing this data for modeling.

Selection of Time Frame: To forecast future health statistics for the clustered countries, we opted for the most recent data to train the multivariate Prophet model. Our dataset encompassed health data from 1960 to 2021, but for our purposes, we narrowed the timeframe to 2000 to 2021.
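The Kruskal-Wallis comparison described in Section 3.5 can be sketched with `scipy.stats.kruskal`; the per-country samples below are hypothetical stand-ins, not World Bank values:

```python
from scipy.stats import kruskal

# Hypothetical values of one indicator for three of the clustered
# countries (illustrative numbers only, not from the dataset).
bahrain = [72.1, 72.8, 73.0, 73.5]
kuwait = [72.4, 72.9, 73.2, 73.6]
oman = [71.9, 72.5, 73.1, 73.4]

# H-statistic and p-value; p >= 0.05 means no statistically
# significant difference among the countries for this indicator.
stat, p = kruskal(bahrain, kuwait, oman)
significant = p < 0.05
```

With heavily overlapping samples like these, the test does not reject the null hypothesis, matching the paper's finding for the year-2000 baseline.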
This eliminated the need for imputing data from distant years.

Reduction of Features: The initial feature importance analysis identified 26 features for the study. However, two features (Cause of death, by non-communicable diseases (% of total) and International migrant stock (% of population)) had a high percentage of missing values across all clustered countries, accounting for up to 81.82% of the total data. We therefore excluded these indicators and kept 24 features.

Imputation of Time-series Data: We identified missing values within our set of 26 features, necessitating imputation for a complete time-series dataset. We used naïve forecasting to fill in the missing data for the years from 2000 to 2021. If a specific year's data was missing for a particular country's indicator, we filled the gap using the preceding year's data for that same indicator. This resulted in a complete time-series dataset with 24 features for five countries.

Table 1: FEATURE IMPORTANCE FOR CLUSTER

| # | Indicator Name | Indicator Code | Feature Importance Value |
|---|---|---|---|
| 1 | People using at least basic sanitation services (% of population)§ | SH.STA.BASS.ZS | 0.9743 |
| 2 | Immunization, measles (% of children ages 12-23 months) | SH.IMM.MEAS | 0.9606 |
| 3 | People using at least basic drinking water services (% of population) | SH.H2O.BASW.ZS | 0.9585 |
| 4 | Immunization, DPT (% of children ages 12-23 months)§ | SH.IMM.IDPT | 0.9257 |
| 5 | Survival to age 65, male (% of cohort) | SP.DYN.TO65.MA.ZS | 0.8753 |
| 6 | Survival to age 65, female (% of cohort)† | SP.DYN.TO65.FE.ZS | 0.8752 |
| 7 | Population ages 25-29, male (% of male population) | SP.POP.2529.MA.5Y | 0.8583 |
| 8 | Population ages 20-24, female (% of female population) | SP.POP.2024.FE.5Y | 0.8437 |
| 9 | Life expectancy at birth, total (years)† | SP.DYN.LE00.IN | 0.8227 |
| 10 | Population ages 25-29, female (% of female population)† | SP.POP.2529.FE.5Y | 0.8216 |
| 11 | Life expectancy at birth, female (years)† | SP.DYN.LE00.FE.IN | 0.7954 |
| 12 | Population ages 30-34, male (% of male population) | SP.POP.3034.MA.5Y | 0.7651 |
| 13 | Cause of death, by non-communicable diseases (% of total)∗ | SH.DTH.NCOM.ZS | 0.7567 |
| 14 | Population ages 20-24, male (% of male population)§ | SP.POP.2024.MA.5Y | 0.7527 |
| 15 | Population ages 30-34, female (% of female population) | SP.POP.3034.FE.5Y | 0.7318 |
| 16 | Population ages 15-64, male (% of male population) | SP.POP.1564.MA.ZS | 0.722 |
| 17 | Population ages 15-64 (% of total population)† | SP.POP.1564.TO.ZS | 0.7077 |
| 18 | Domestic general government health expenditure (% of current health expenditure) | SH.XPD.GHED.CH.ZS | 0.7007 |
| 19 | Population ages 35-39, male (% of male population) | SP.POP.3539.MA.5Y | 0.6914 |
| 20 | Population growth (annual %)¶ | SP.POP.GROW | 0.689 |
| 21 | International migrant stock (% of population)∗ | SM.POP.TOTL.ZS | 0.6842 |
| 22 | Population ages 05-09, female (% of female population) | SP.POP.0509.FE.5Y | 0.6734 |
| 23 | Population ages 10-14, female (% of female population)† | SP.POP.1014.FE.5Y | 0.6686 |
| 24 | Population ages 0-14, female (% of female population)† | SP.POP.0014.FE.ZS | 0.6615 |
| 25 | Population, male (% of total population)† | SP.POP.TOTL.MA.ZS | 0.6595 |
| 26 | Population ages 15-19, female (% of female population)† | SP.POP.1519.FE.5Y | 0.6388 |

∗ Removed because 81.82% of values were missing from the year 2000 to 2021.
† Removed because of high correlation with other important feature(s) ranked higher by feature importance.
§ Removed for poor predictions from the univariate Prophet model; not used in multivariate model training.
¶ Removed because of negative values in some years, so log-transform scaling could not be applied; removed from the forecasting.

Logarithmic Scaling on Time-series Data: Prior to forecasting, we performed a logarithmic transformation for data scaling and reverted to the original values for performance measurement. Although the Min-Max scaling algorithm was used initially, we chose logarithmic scaling for the time series forecast.
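The preceding-year (naïve) imputation and the logarithmic scaling round-trip described above can be sketched with pandas and numpy on a toy series:

```python
import numpy as np
import pandas as pd

# Toy yearly series with gaps, standing in for one country/indicator.
years = list(range(2000, 2006))
s = pd.Series([88.0, None, 90.5, None, None, 92.0], index=years)

# Naive-forecast imputation: a missing year takes the preceding
# year's value for the same indicator.
filled = s.ffill()

# Logarithmic scaling before model fitting, inverted after forecasting
# (this is why indicators with negative values had to be dropped).
logged = np.log(filled)
restored = np.exp(logged)
```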
This decision was based on the lower error rate found with logarithmic scaling when returning to the original data [20].

3.6.2 Prophet Forecasting Model to Predict Indicator Values. Our approach to predicting yearly indicator values for the clustered countries and important features involved multivariate modeling in Prophet. This is what enables "what-if" analysis for forecasting health indicators: if we simulate or forecast individual predictor indicators to guide policy, we can see the effects of those simulations on our final multivariate model. It is crucial to understand how these indicators' forecasts varied per country and whether the Prophet model's results were consistent for all clustered countries.

Univariate Prophet Model. The univariate Prophet model forecasts a single time series, taking into account the historical values of the target variable and identifying patterns and trends to make future predictions. The model captures seasonality (s(t)), trend (g(t)), holiday effects (h(t)) (if any), and error (ε(t)) using additive regression components:

y(t) = g(t) + s(t) + h(t) + ε(t)   (2)

In our work, we used a univariate Prophet model to forecast the predictor values for the future. However, if existing econometric models of other types are better suited for a particular indicator, those can also be used. The univariate model for each predictor built the future dataframe for the years 2022 to 2031 (10 years).

Multivariate Prophet Model. The multivariate Prophet model extends the univariate model by incorporating additional exogenous variables or features as regressors that can influence the target variable. These additional exogenous variables (f1(t), f2(t), ..., fn(t)) can be other time series data or external factors such as economic indicators. In this work, we incorporated other indicators in the health statistics data as regressors to predict specific indicators one by one.
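The additive trend-plus-regressors structure just described can be illustrated without the prophet package by fitting the same structure with ordinary least squares on synthetic data. This is a stand-in for Prophet's regressor mechanism, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000, 2022, dtype=float)  # training years 2000-2021

# Synthetic exogenous regressors f1(t), f2(t), standing in for other
# health indicators, plus a linear trend g(t) and small noise.
f1 = rng.normal(size=t.size)
f2 = rng.normal(size=t.size)
y = 0.5 * (t - 2000) + 2.0 * f1 - 1.0 * f2 + rng.normal(0.0, 0.01, t.size)

# Design matrix [1, trend, f1, f2] mirrors the additive structure
# y(t) = g(t) + f1(t) + f2(t) + error, with a linear g(t).
X = np.column_stack([np.ones_like(t), t - 2000, f1, f2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Prophet's multivariate variant plays the analogous role: each regressor registered via `add_regressor` contributes an f_i(t) term to the additive model before fitting.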
By including these variables, the model can capture their impact on the target variable and improve the accuracy of predictions:

y(t) = g(t) + s(t) + h(t) + f1(t) + f2(t) + ... + fn(t) + ε(t)   (3)

By incorporating relevant external factors, the multivariate model can capture additional information and dependencies that impact the target variable, which can lead to more accurate and reliable predictions. Including additional variables provides insights into the factors driving the target variable's behavior, enables a better understanding of the relationships and dependencies among different variables in the system, and allows for customization based on the specific requirements of the forecasting problem. However, multivariate forecasting also introduced additional complexity, such as more involved data preprocessing, feature selection, and potential correlation considerations.

The code to replicate this study can be found at: https://github.com/iupui-soic/WB-cluster-forecast

4 RESULTS
4.1 Clustering
With a distance threshold set at 5, our cluster dendrogram (Fig.
1) presented nine (9) visually distinct clusters.

The Silhouette score, a measure used to evaluate the clusters and the countries within the nine clusters, is displayed in Table 2.

Table 2: CLUSTERED COUNTRIES AND EVALUATION METRIC

| Cluster # | Cluster Silhouette Score | Countries |
|---|---|---|
| 1 (European Countries) | 0.2914 | Bulgaria, Belarus, Czechia, Estonia, Croatia, Hungary, Lithuania, Latvia, Slovenia, Ukraine |
| 2 (European, North American, Oceanian Countries and Japan) | 0.4851 | Australia, Austria, Belgium, Canada, Switzerland, Germany, Denmark, Spain, Finland, France, United Kingdom, Greece, Ireland, Iceland, Italy, Japan, Luxembourg, Netherlands, Norway, New Zealand, Portugal, Sweden, United States |
| 3 (East & West African, South Asian and Other Countries) | 0.4227 | Benin, Bangladesh, Congo, Comoros, Eritrea, Ghana, Gambia, Haiti, Cambodia, Madagascar, Mauritania, Nepal, Pakistan, Senegal, Togo, Yemen |
| 4 (Southern African Countries) | 0.2484 | Botswana, Lesotho, Namibia, Eswatini |
| 5 (African Countries) | 0.3309 | Burundi, Burkina Faso, Cameroon, Ethiopia, Kenya, Liberia, Mali, Mozambique, Malawi, Niger, Nigeria, Rwanda, Sierra Leone, Chad, Tanzania, Uganda, Zambia |
| 6 (Ensemble of Countries from Different Regions) | 0.6693 | Albania, Argentina, Armenia, Bahamas, Bosnia and Herzegovina, Brazil, Barbados, Chile, Colombia, Costa Rica, Cuba, Cyprus, Georgia, Israel, Jamaica, Kazakhstan, Sri Lanka, Moldova, Malta, Mauritius, Panama, Singapore, Seychelles, Thailand, Uruguay |
| 7 (Large Economy Countries in Asia) | 0.5667 | China, India |
| 8 (Middle Eastern Countries) | 0.6597 | Bahrain, Kuwait, Oman, Qatar, Saudi Arabia |
| 9 (Ensemble of Countries from Different Regions) | 0.3282 | Azerbaijan, Belize, Bolivia, Algeria, Ecuador, Egypt, Fiji, Guatemala, Guyana, Indonesia, Iran, Jordan, Kyrgyz Republic, Kiribati, Lebanon, Morocco, Maldives, Mexico, Myanmar, Mongolia, Malaysia, Peru, Philippines, Paraguay, Solomon Islands, El Salvador, Turkmenistan, Tonga, Tunisia, Uzbekistan, Vietnam, Vanuatu |

Figure 2: Time-series Yearly Data and Future Forecasts for Qatar using Univariate Prophet Model
Figure 3: Time-series
Yearly Data and Future Forecasts for Qatar using Multivariate Prophet Model

4.2 Feature Relevance
We analyzed correlations between the features. If an indicator demonstrated a strong positive or negative correlation with any other indicators in the dataset, we excluded it. We retained only those indicators that did not correlate highly with others. This process yielded 15 indicators out of the original 26 in Cluster-8, shown in Table 1.

4.3 Time-Series Forecasting
Our secondary objective was to apply a multivariate time series forecasting Prophet model to the significant indicators of the five countries within a cluster [35]. A preliminary statistical test highlighted similarities in the indicators' values for the year 2000.

4.3.1 Outcome of Feature Reduction. Due to many missing values, we excluded two features identified through feature importance. We also removed nine indicators that exhibited a high correlation with other significant features and one indicator that displayed negative values, which was unsuitable for logarithmic transformation. Consequently, we proceeded with univariate forecasting for the remaining 14 indicators.

Table 3: ACCURACY METRICS FOR THE FORECASTED INDICATOR VALUES AMONG THE COUNTRIES (Avg±SD)

| Indicators | RMSE Prophet | RMSE LSTM | MAPE Prophet | MAPE LSTM | R² Prophet | R² LSTM | Adj. R² Prophet | Adj. R² LSTM |
|---|---|---|---|---|---|---|---|---|
| Population ages 30-34, male | 0.0001±0.0001 | 0.5941±0.3203 | 0±0 | 0.0401±0.0259 | 1±0 | 0.5997±0.3132 | 1±0 | 0.5497±0.3523 |
| Population ages 30-34, female | 0.0001±0.0001 | 0.2563±0.0961 | 0±0 | 0.0216±0.0109 | 1±0 | 0.6592±0.468 | 1±0 | 0.6166±0.5265 |
| Population ages 35-39, male | 0.0002±0.0002 | 0.3445±0.1631 | 0±0 | 0.0259±0.0093 | 1±0 | 0.6581±0.2856 | 1±0 | 0.6154±0.3213 |
| Population ages 25-29, male | 0.0059±0.0127 | 1.1566±0.7031 | 0.0006±0.0013 | 0.071±0.0374 | 1±0 | 0.6031±0.252 | 1±0.0001 | 0.5535±0.2835 |
| Population ages 20-24, female | 0.0287±0.0637 | 0.4546±0.3414 | 0.0032±0.0072 | 0.0421±0.0381 | 0.9979±0.0046 | 0.5067±0.3441 | 0.9956±0.0097 | 0.445±0.3871 |
| Population ages 15-64, male | 0.001±0.0012 | 1.2822±0.8086 | 0±0 | 0.0143±0.0092 | 1±0 | 0.4109±0.6021 | 1±0 | 0.3372±0.6774 |
| Population ages 05-09, female | 0.0001±0.0001 | 0.5177±0.1904 | 0±0 | 0.0458±0.0212 | 1±0 | 0.0855±1.1177 | 1±0 | -0.0288±1.2574 |
| Survival to age 65, male | 0.001±0.0007 | 1.1749±0.7324 | 0±0 | 0.0125±0.0089 | 1±0 | 0.5497±0.6035 | 1±0 | 0.4935±0.6789 |
| Domestic general government health expenditure | 0.4999±0.498 | 2.3871±1.0832 | 0.0058±0.0059 | 0.0282±0.015 | 0.9681±0.0409 | 0.4775±0.2928 | 0.933±0.0859 | 0.4122±0.3294 |
| Immunization, measles | 0.0009±0.0008 | 1.0328±0.6968 | 0±0 | 0.0086±0.0054 | 1±0 | 0.2123±0.2276 | 1±0 | 0.1139±0.256 |
| People using at least basic drinking water services | 0.0008±0.0006 | 0.2138±0.3582 | 0±0 | 0.002±0.0035 | 0.7997±0.4471 | 0.5849±0.3723 | 0.5794±0.9388 | 0.533±0.4189 |

4.3.2 Statistical Testing on the Existing Indicator Values. We performed the Kruskal-Wallis test on the values of the 15 indicators for the countries within the clusters. The resulting p-values were all greater than 0.05, suggesting no statistically significant differences among the values of the indicators within the clustered countries. Since these indicators demonstrated similar values across countries, we continued with time series forecasting.

4.3.3 Univariate & Multivariate Prophet.
Future Dataframe. Univariate Prophet modeling produced reliable predictions for most indicators, yielding low RMSE & MAPE and better R² values. However, three indicators demonstrated inferior R² values compared to the others, leading us to exclude them from the multivariate models. These indicators were: Population ages 20-24, male (% of male population); Immunization, DPT (% of children ages 12-23 months); and People using at least basic sanitation services (% of the population).

Future Forecasts. The multivariate Prophet model generated forecasts for each of the 11 indicators under consideration.
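The accuracy metrics reported in Table 3 (RMSE, MAPE, R², adjusted R²) can be computed as follows; the held-out values below are illustrative toy numbers, not the paper's data, and `n_regressors` is the number of predictors used in the adjusted-R² correction:

```python
import numpy as np

def forecast_metrics(y_true, y_pred, n_regressors):
    # RMSE, MAPE, R^2 and adjusted R^2 as reported in Table 3.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = y_true.size
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    mape = float(np.mean(np.abs((y_true - y_pred) / y_true)))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = float(1.0 - ss_res / ss_tot)
    # Adjusted R^2 penalizes for the number of regressors used.
    adj_r2 = float(1.0 - (1.0 - r2) * (n - 1) / (n - n_regressors - 1))
    return rmse, mape, r2, adj_r2

# Hypothetical held-out years for one indicator (illustrative only).
y_true = [70.0, 71.0, 72.0, 73.0, 74.0, 75.0]
y_pred = [70.2, 70.9, 72.1, 72.8, 74.2, 75.1]
rmse, mape, r2, adj_r2 = forecast_metrics(y_true, y_pred, n_regressors=1)
```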
In each forecast, the multivariate model included 10 additional regressors corresponding to the other 10 indicators, which served as predictors for the target indicator. The accuracy metrics for the multivariate models are detailed in Table 3. The univariate forecasting model predicted 15 indicators for a sample country (Qatar), and the multivariate model predicted 11 indicators (see Fig. 2 and Fig. 3, respectively). These figures illustrate the multivariate Prophet model's superior forecasting performance. The combined forecasts for the clustered countries (Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia) from the year 2000 to 2031 for all 11 indicators are illustrated in Fig. 4 with continuous error bar plots. The differences in the indicators in the future years can be seen in Fig. 4.

4.3.4 Statistical Analysis on the Forecasting. The future forecasted indicator values also showed statistically significant differences (p < 0.05) among the countries, highlighting that the forecasted trajectories of the countries might change in the future based on the already changing nature of the predictors. Such modeling would not have been possible using univariate forecasting alone.
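Table 3 reports RMSE, MAPE, R2, and adjusted R2 per indicator. The paper does not spell out its formulas, so the standard definitions below are an assumption; `n_predictors` would be 10 for the multivariate models with their 10 regressors:

```python
import numpy as np

def forecast_metrics(y_true, y_pred, n_predictors: int):
    """RMSE, MAPE, R2, and adjusted R2 under their textbook definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mape = np.mean(np.abs((y_true - y_pred) / y_true))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)
    return rmse, mape, r2, adj_r2
```

In practice these would be averaged over the five countries to obtain the Avg±SD cells of Table 3.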
The cluster linkage cutoff would need to be significantly lowered to establish more readily apparent similarities within each cluster. However, this could lead to fewer predictor indicators, affecting our features of importance. If we expand the indicators used in feature selection, we risk complicating the model and reducing its interpretability [16].

Other clustering algorithms, especially spectral clustering, while powerful in certain cases, may not always be the most appropriate choice. Spectral clustering operates on graph theory principles and requires constructing a similarity matrix and computing eigenvectors, which can be computationally expensive and memory-intensive for larger datasets. Spectral clustering also contains a stochastic factor, which we avoided by using hierarchical clustering.

Given the size and nature of our dataset, hierarchical clustering with Ward's method proved to be a more scalable and efficient option. It aligns well with our goals of exploring hierarchical patterns and capturing diverse cluster shapes in the health and nutrition dataset. Hierarchical clustering also provided meaningful insights into the health indicators across countries. In addition, logarithmic scaling of the dataset yielded a lower mean squared error overall when predicting future feature values compared to Min-Max scaling.

While our models present robust and meaningful findings, they also highlight some challenges that need to be considered in future studies. A critical point is the trade-off between the granularity of clustering and the complexity of multivariate models.
While deeper clustering might yield more nuanced insights, it can also reduce the number of predictor indicators and increase model complexity. This calls for a balanced approach to ensure the interpretability and practical utility of the models.

Additionally, our multivariate forecasting model is predicated on current and past trends. The dynamic nature of health indicators and their susceptibility to external factors such as political changes, economic fluctuations, or global health crises might alter these trends significantly. Future research must consider these potential disruptions and explore methods to account for such unpredictability.

Further, we could determine certain associations through the statistical differences identified among the features after analysis and prediction with the multivariate model. For example, Fig. 4i shows Qatar's forecasted health expenditure declining, and Fig. 4j indicates a corresponding decline in immunization. Similar declines are seen in the female population of potentially maternal age groups (Fig. 4c and 4e). From this we drew the validating conclusion that our multivariate Prophet model captures the reliance of one feature on another for a country [22]. This can aid health assessment research associated with various indicators, such as the work by Amoatey et al. [3].

Recognizing these trends and connections could guide policymakers or health practitioners toward effective strategies for improving overall health outcomes. Moreover, our predictions consider various population age groups, offering a comprehensive perspective on health prospects [9]. Our study's application of multivariate forecasting allowed us to predict future health outcomes based on current trends and patterns.
This model has allowed us to project possible trajectories for various health indicators in the Middle Eastern countries cluster, aiding in long-term strategic health planning for the region. The associations identified between different features underline the interconnectedness of health outcomes, signaling the necessity of an integrated approach to healthcare policy.

5.1 Limitations
This study has its limitations. Although we selected 26 indicators from the World Bank dataset's total of 128, not all could be incorporated into our multivariate prediction model. For example, the Population Growth indicator was excluded because it contained negative values incompatible with logarithmic transformation. However, our model's predictions could be significantly influenced by the inclusion of this indicator.

Figure 4: Forecasts of each indicator for five clustered countries. Panels: (a) Population ages 35-39, male (% of male population); (b) Population ages 30-34, male (% of male population); (c) Population ages 30-34, female (% of female population); (d) Population ages 25-29, male (% of male population); (e) Population ages 20-24, female (% of female population); (f) Population ages 15-64, male (% of male population); (g) Population ages 05-09, female (% of female population); (h) Survival to age 65, male (% of cohort); (i) Domestic general government health expenditure (% of current health expenditure); (j) Immunization, measles (% of children ages 12-23 months); (k) People using at least basic drinking water services (% of population). Blue forecast lines are for Bahrain, orange for Kuwait, green for Oman, red for Qatar, and purple for Saudi Arabia.
Similarly, other omitted indicators could have offered additional insights into overall health outcomes.

5.2 Future Work
Future work could involve constructing a more informative model with an expanded set of features or a larger cluster of countries. Techniques like Neural Prophet [37], DeepAR [30], or even simpler models like a Random Forest Regressor [10] could be explored. Alternative approaches to constructing future dataframes, such as Auto ARIMA, could yield more reliable results.

6 CONCLUSION
In conclusion, our study has identified key factors influencing health outcomes in selected Gulf Cooperation Council (GCC) countries (Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia). We highlighted the importance of population wellness and age-specific strategies in healthcare management and disease prevention. Our method involved data preprocessing, clustering using Ward's method, feature selection, and time series forecasting with multivariate Prophet. This research provides a comprehensive approach to health data analysis, identifying crucial influencers of health outcomes and delivering actionable insights for policymakers and healthcare professionals using machine learning and forecasting techniques.
irZpjz7H3fF
Interesting analysis
4: Good paper, accept
## Clarity
This paper and its proposed method are easy to follow.
## Quality
The analysis is well-motivated and fully delivers the idea.
## Originality
This is original work on an interesting problem.
## Significance
The work is significant.
## Pros:
- Well-written; clearly delivers the ideas, proposed method, and results.
- The analysis is interesting to me.
- The authors are well aware of the limitations of the proposed method.
## Cons:
- The way the authors obtain feature importance is not clear.
- The authors may consider other methods for multivariate time-series forecasting, such as MLP, LSTM, …
- The authors did not include a similar analysis for the univariate case to highlight the benefit of the multivariate model, although they remove some features based on the performance of univariate models.
- The variance of the future forecasting results is high, so the conclusions are somewhat uncertain (beyond the mentioned factors like political changes, economic fluctuations, …).
3: The reviewer is fairly confident that the evaluation is correct
Unyf3QsNmx
KDD.org/2023/Workshop/epiDAMIK
2023
Hierarchical Clustering and Multivariate Forecasting for Health Econometrics
["Atika Rahman Paddo", "Sadia Afreen", "Saptarshi Purkayastha"]
Data science approaches in Health Econometrics and Public Health research are limited, with a lack of exploration of state-of-the-art computational methods. Recent studies have shown that neural networks and machine learning methods outperform traditional statistical methods in forecasting and time-series analysis. In this study, we demonstrate the use of unsupervised and supervised machine learning approaches to create "what-if" scenarios for forecasting the long-term impact of changes in socio-economic indicators on health indicators. These indicators include basic sanitation services, immunization, population ages, life expectancy, and domestic health expenditure. To begin, we utilized Hierarchical Cluster Analysis to group 131 countries into 9 clusters based on various indicators from the World Bank Health Statistics and Nutrition dataset. This step allowed us to create clusters of countries. In order to showcase the feasibility of our approach, we performed a time series analysis using multivariate prophet on the most significant features from a cluster consisting of Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia. The study developed robust models (𝑅2 = 0.93+) capable of forecasting 11 health indicators up to 10 years into the future. By employing these "what-if" scenarios and forecasting models, policymakers and healthcare practitioners can make informed decisions and effectively implement targeted interventions to address health-related challenges.
["Clustering", "forecasting", "health econometrics", "data science"]
ABSTRACTData science approaches in Health Econometrics and Public Healthresearch are limited, with a lack of exploration of state-of-the-artcomputational methods. Recent studies have shown that neuralnetworks and machine learning methods outperform traditional sta-tistical methods in forecasting and time-series analysis. In this study,we demonstrate the use of unsupervised and supervised machinelearning approaches to create "what-if" scenarios for forecasting thelong-term impact of changes in socio-economic indicators on healthindicators. These indicators include basic sanitation services, im-munization, population ages, life expectancy, and domestic healthexpenditure. To begin, we utilized Hierarchical Cluster Analysisto group 131 countries into 9 clusters based on various indicatorsfrom the World Bank Health Statistics and Nutrition dataset. Thisstep allowed us to create clusters of countries. In order to showcasethe feasibility of our approach, we performed a time series analysisusing multivariate prophet on the most significant features froma cluster consisting of Bahrain, Kuwait, Oman, Qatar, and SaudiArabia. The study developed robust models ( R2=0.93+) capableof forecasting 11 health indicators up to 10 years into the future.By employing these "what-if" scenarios and forecasting models,policymakers and healthcare practitioners can make informed deci-sions and effectively implement targeted interventions to addresshealth-related challenges.CCS CONCEPTS•Computing methodologies →Modeling methodologies ;•Applied computing →Health informatics ;•Informationsystems→Clustering ;Information systems applications .KEYWORDSClustering, forecasting, health econometrics, data scienceACM Reference Format:Atika Rahman Paddo, Sadia Afreen, and Saptarshi Purkayastha. 2023. Hier-archical Clustering and Multivariate Forecasting for Health Econometrics.InProceedings of epiDAMIK @ SIGKDD Workshop. ACM, New York, NY,USA, 8 pages. 
https://doi.org/XXXXXXX.XXXXXXXPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from [email protected] @ SIGKDD Workshop, 2023©2023 Association for Computing Machinery.ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONHealth econometrics is a multidisciplinary field that combines eco-nomics and statistics to study various aspects of healthcare systems,policies, and outcomes. Traditionally, econometric methods havebeen employed to analyze healthcare data, including regressionmodels, panel data analysis, and instrumental variable techniques[20, 7]. However, there is a growing recognition of the potentialbenefits of incorporating these advanced techniques into healtheconometrics research.In today’s interconnected society, understanding the factors thataffect health outcomes is crucial for effective policymaking andhealthcare treatments. With the availability of extensive healthdata, advanced analysis methods can provide valuable insights tosupport evidence-based decision-making. The World Bank’s HealthStatistics collection offers a wealth of data on various health in-dices across nations [26]. In this study, we aim to develop a betterunderstanding of the predefined Gulf Cooperation Council (GCC)countries, which share similar economies and development goals[15]. By utilizing a clustering algorithm, we have identified simi-larities in their health statistics [34]. 
However, this study does notinclude one of the GCC countries, the United Arab Emirates (UAE).Katoue et al. argued that the health issues faced in the MiddleEast and North Africa regions must be highlighted, as these coun-tries still face challenges in providing equitable and high-qualityhealthcare services. Limited literature supports evidence of im-provements in these areas [13]. To address the health challengesin the GCC countries, including Bahrain, Kuwait, Oman, Qatar,Saudi Arabia, and the UAE, innovative strategies are necessary toimprove the overall health status of the Middle Eastern countries[15, 19]. A United Nations report highlights disparities and com-monalities in health factors among different regions in the Arabworld [31]. While the report suggests that the GCC countries havemade progress in maintaining sanitation and safe drinking water,it is unclear whether all countries in the region will continue withthe same policies in the future [31].This study aims to identify any disparities between countries re-garding uniform healthcare provision. The 2015 World Bank reportemphasizes the impact of health outcomes on health policies andexpenditure in the GCC countries [28]. Changes in health outcomes,such as non-communicable diseases and life expectancy, coupledwith inflation, may create disparities in health expenditure amongthese countries [2].It remains uncertain which countries can improve overall health-care and which may lag behind in developing uniform health poli-cies [8]. Additionally, our research study focuses on populationwell-being, particularly in different age groups, and factors suchepiDAMIK @ SIGKDD Workshop, 2023 Paddo, Afreen and Purkayasthaas expenditure, immunization, and survival rates. Understandingthe association between age and other health factors is crucialfor targeting "age-specific" policies in healthcare management anddisease prevention [9]. 
This is significant in terms of healthcaremanagement and disease prevention.This research paper combines cluster analysis, feature impor-tance analysis, and multivariate time series modeling to uncover theunderlying factors influencing health outcomes within a selectedcluster comprising five GCC countries: Bahrain, Kuwait, Oman,Qatar, and Saudi Arabia. The findings contribute to a deeper under-standing of the complex dynamics of health indicators and provideactionable insights for policymakers and healthcare professionals.2 RELATED WORKSBalçik et al. [5] conducted a study on clustering algorithms thatis similar to ours. They focused on the hierarchical clustering ofEuropean Union countries based on preselected features to analyzehealthcare development. Their clustering results were evaluatedusing statistical differences between indicator values. Similarly,Raheem et al. [29] approached their objective using the silhouettescore, providing a clearer context for distinguishing clusters. Whileboth approaches seemed reasonable, we opted to use the silhouettescore in our study to understand the distinctiveness of our clusters,which yielded high accuracy in identifying cluster formation.Several studies have been conducted on a national level usingclustering approaches to determine differences in health indicatorsand gain insights into various countries. Proksch et al. [27] analyzedthe clustering of 30 OECD countries to identify the varying aspectsof health that differentiate these clusters. Muldoon et al. [23] andLefèvre et al. [17] explored similarities among countries and theircontributions to health factors. The former focused on mortalitysignificance, while the latter employed a multivariate clusteringapproach to identify patterns in population and healthcare systems.In contrast to these studies, our research includes a forecastingapproach, which provides predictive conclusions for policymakers,analysts, and health practitioners.Levantesi et al. 
[18] also utilized a multivariate forecasting ap-proach to develop a predictive understanding of healthcare, albeitnot aligned with the Prophet model. Khan & Noor [14] explored theapplication of the Prophet time series approach to visualize futurehealth outcomes, but their study employed a univariate Prophet ap-proach. In our study, we employed a multivariate Prophet approach,which offered a unique perspective by determining the relationshipbetween changes in one indicator and another more accurately.Ahmed et al. [1] and Ampofo & Boateng [4] also adopted interest-ing approaches using multivariate Prophet, focusing specifically oncardiovascular and diabetes health sectors, respectively.Therefore, our research aims to establish a comprehensive as-sociation among predicted population well-being, which can beutilized to advance our understanding of healthcare outcomes.3 METHODOLOGYThe methodology utilized in this research paper followed a se-quential process to analyze health data. Firstly, the data underwentpreprocessing. Next, a dendrogram was constructed using the Wardmethod to identify clusters. A threshold was applied using the ’fclus-ter’ function to determine the number of clusters. Afterward, theimportant features for each cluster were identified using a thresholdof 0.615. We employed the multivariate Prophet method for time se-ries forecasting and predicting future trends. Finally, statistical testswere conducted on the features to identify significant differencesin the upcoming years.3.1 Data CollectionWe obtained the Health Statistics and Nutrition dataset from TheWorld Bank, which offers comprehensive health indicators for vari-ous countries from 1960 to 2021.3.2 Data Preprocessing3.2.1 Data Cleaning. Initially, the original dataset contained in-formation for 266 countries/regions and 255 indicators. To focuson a specific midway time shot, we selected data from 2000. 
We excluded regional aggregations from the dataset (EU, AFRO, etc.) and countries with significant missing values for most indicators (e.g., United Arab Emirates, Aruba, Afghanistan, Poland, Barbados, Guinea). Additionally, we removed indicators with extensive null values across countries. Any remaining null values for a country were imputed using the median of that column. After cleaning, the dataset comprised 134 countries and 128 variables.

3.2.2 Data Scaling using Min-Max Scaler. To ensure consistency and prevent any single feature from dominating the analysis, we scaled the data using the Min-Max Scaler [6]. This technique normalizes the data to the [0, 1] range by subtracting the minimum value and dividing by the range.

3.3 Clustering
3.3.1 Linkage Matrix. Next, we computed the linkage matrix using the linkage function from the scipy.cluster.hierarchy module. The linkage matrix represents the hierarchical clustering structure of the data based on pairwise distance calculations.

3.3.2 Creating a Dendrogram using Ward's Method. We employed Ward's method to construct a dendrogram, which visually displays the hierarchical relationships among the data points [24]. Ward's method minimizes the total within-cluster variance at each merge step. The resulting dendrogram exhibited hierarchical clustering patterns over a distance scale of 0 to 27, aiding in understanding the grouping patterns within the data (see Fig. 1).

3.3.3 Determining the Number of Clusters using fcluster. The number of clusters was determined by assigning data points to clusters based on a given threshold using the fcluster function. A threshold value of 5 was chosen to define the clusters within the dataset. The fcluster function, with the specified threshold, provided the cluster assignments for each data point.
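The scaling-and-clustering pipeline just described (Min-Max scaling, Ward linkage, distance-threshold cut with `fcluster`) can be sketched end to end. The two-blob toy matrix and the cut height of 2.0 below are illustrative stand-ins for the real 134-countries-by-128-indicators matrix and the paper's threshold of 5:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
# Toy stand-in for the countries-by-indicators matrix (the real one is 134 x 128):
# two well-separated groups of 20 "countries" with 5 "indicators" each.
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)),
               rng.normal(5.0, 0.3, (20, 5))])

X_scaled = MinMaxScaler().fit_transform(X)         # each indicator rescaled to [0, 1]
Z = linkage(X_scaled, method="ward")               # Ward linkage matrix
labels = fcluster(Z, t=2.0, criterion="distance")  # cut the dendrogram at a height
n_clusters = len(np.unique(labels))                # 2 clusters for this toy data
```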
The above threshold resulted in 9 clusters.

3.3.4 Evaluation Metrics for Each Cluster: To assess the quality of the clustering results and evaluate the fit of each data point to its assigned cluster, we calculated the Silhouette score for each cluster. The Silhouette score measures both the cohesion within each cluster and the separation between clusters [32, 25]. The score was calculated using equation 1.

Figure 1: Linkage matrix of nine clusters for the countries in a dendrogram

Silhouette = (1/n) * Σ_i (b_i - a_i) / max(a_i, b_i)    (1)

where a_i is the average distance between sample i (for i = 1, 2, 3, ..., n) and all other points in its cluster. For each other cluster in the dataset, the average distance between the sample and all points in that cluster is noted, and the minimum of these distances is b_i. n is the total number of samples. To calculate the per-cluster Silhouette score, a represents the average distance between the data point and the other data points within the same cluster, and b represents the average distance between the data point and the data points in the nearest neighboring cluster.

The Silhouette score ranges from -1 to 1, with a higher score indicating better clustering results. A score close to 1 signifies well-separated clusters, while a score close to -1 suggests overlapping or incorrectly assigned clusters. The average silhouette score of all data points within a cluster was calculated to obtain the silhouette score for that cluster. Based on the silhouette score and its more manageable country count, Cluster-8 was chosen for further analysis with time series forecasting.

3.3.5 Using hierarchical clustering over other clustering methods: We chose hierarchical clustering using Ward's method for our analysis of the health statistics and nutrition dataset. Hierarchical clustering allows us to explore the data in a hierarchical structure, capturing both global and local patterns of similarity.
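The per-cluster score of equation 1 can be obtained by averaging scikit-learn's per-sample silhouette values within each cluster. A minimal sketch on toy points (not the paper's data):

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def per_cluster_silhouette(X, labels):
    """Mean silhouette value of the samples inside each cluster
    (equation 1 restricted to one cluster at a time)."""
    labels = np.asarray(labels)
    s = silhouette_samples(X, labels)
    return {int(c): float(s[labels == c].mean()) for c in np.unique(labels)}

# Two tight, well-separated toy clusters give scores close to 1.
X = np.array([[0.0, 0.0], [0.0, 0.1], [0.1, 0.0],
              [5.0, 5.0], [5.0, 5.1], [5.1, 5.0]])
scores = per_cluster_silhouette(X, [0, 0, 0, 1, 1, 1])
```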
It is well-suited for datasets with arbitrary cluster shapes and sizes, making it suitable for analyzing health indicators across countries.

3.4 Feature Selection
Following the clustering of the countries, our focus shifted to pinpointing the most crucial characteristics. We accomplished this using the sklearn library to perform feature selection. We evaluated 26 key features within the selected cluster, which ranked within the top percentile (Table 1).

3.4.1 Feature Importance Analysis for Each Cluster. Centroids, or representative data points for each cluster, were determined by averaging the scaled data. The significance of each feature was ascertained by arranging the feature values in descending order. A threshold of 0.815 yielded fewer features and did not provide a comprehensive outlook for health predictions. As a result, we opted for a threshold of 0.615, which allowed us to conduct a time series forecast with a broader feature set.

3.5 Statistical Tests
Our reference timeframe was set to the year 2000 for initiating the time series forecast, and we examined the data for each indicator within the clustered countries. The Kruskal-Wallis non-parametric test served as an effective method for determining value significance [36]. We utilized this test to discern statistically significant discrepancies among the indicators' values across different countries. After projecting the values for the next decade (2022-2031), we repeated the statistical test on these forecasted values to highlight significant differences between countries.

3.6 Time-Series Forecasting
3.6.1 Data Processing for Time-Series Analysis. Several factors were considered when preparing the data for modeling.

Selection of Time Frame: To forecast future health statistics for the clustered countries, we opted for the most recent data to train the multivariate Prophet model. Our dataset encompassed health data from 1960 to 2021, but for our purposes, we narrowed the timeframe to 2000 to 2021.
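The Kruskal-Wallis comparison across countries is available directly in SciPy. The grouped samples below are hypothetical indicator values, one group per country; the first triple is drawn from a single distribution, the second is deliberately separated:

```python
from scipy.stats import kruskal

# Hypothetical yearly values of one indicator, grouped by country.
similar = ([1, 4, 7, 10, 13], [2, 5, 8, 11, 14], [3, 6, 9, 12, 15])
distinct = (list(range(10)), list(range(100, 110)), list(range(200, 210)))

_, p_similar = kruskal(*similar)    # interleaved groups: no significant difference
_, p_distinct = kruskal(*distinct)  # clearly separated groups: significant

print(p_similar > 0.05, p_distinct < 0.05)  # -> True True
```

The paper applies the same test twice: once on the year-2000 values of the 15 indicators and once on the forecasted 2022-2031 values.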
This eliminated the need for imputing data from distant years.

Reduction of Features: The initial feature importance analysis identified 26 features for the study. However, two features (Cause of death, by non-communicable diseases (% of total) and International migrant stock (% of population)) had a high percentage of missing values across all clustered countries, accounting for up to 81.82% of the total data. We therefore excluded these indicators and kept 24 features.

Imputation of Time-series Data: We identified missing values within our set of 26 features, necessitating imputation for a complete time-series dataset. We used Naïve forecasting to fill in the missing data for the years 2000 to 2021: if a specific year's data was missing for a particular country's indicator, we filled the gap using the preceding year's data for that same indicator. This resulted in a complete time-series dataset with 24 features for five countries.

Table 1: FEATURE IMPORTANCE FOR CLUSTER
# | Indicator Name | Indicator Code | Feature Importance Value
1 | People using at least basic sanitation services (% of population)§ | SH.STA.BASS.ZS | 0.9743
2 | Immunization, measles (% of children ages 12-23 months) | SH.IMM.MEAS | 0.9606
3 | People using at least basic drinking water services (% of population) | SH.H2O.BASW.ZS | 0.9585
4 | Immunization, DPT (% of children ages 12-23 months)§ | SH.IMM.IDPT | 0.9257
5 | Survival to age 65, male (% of cohort) | SP.DYN.TO65.MA.ZS | 0.8753
6 | Survival to age 65, female (% of cohort)† | SP.DYN.TO65.FE.ZS | 0.8752
7 | Population ages 25-29, male (% of male population) | SP.POP.2529.MA.5Y | 0.8583
8 | Population ages 20-24, female (% of female population) | SP.POP.2024.FE.5Y | 0.8437
9 | Life expectancy at birth, total (years)† | SP.DYN.LE00.IN | 0.8227
10 | Population ages 25-29, female (% of female population)† | SP.POP.2529.FE.5Y | 0.8216
11 | Life expectancy at birth, female (years)† | SP.DYN.LE00.FE.IN | 0.7954
12 | Population ages 30-34, male (% of male population) | SP.POP.3034.MA.5Y | 0.7651
13 | Cause of death, by non-communicable diseases (% of total)∗ | SH.DTH.NCOM.ZS | 0.7567
14 | Population ages 20-24, male (% of male population)§ | SP.POP.2024.MA.5Y | 0.7527
15 | Population ages 30-34, female (% of female population) | SP.POP.3034.FE.5Y | 0.7318
16 | Population ages 15-64, male (% of male population) | SP.POP.1564.MA.ZS | 0.722
17 | Population ages 15-64 (% of total population)† | SP.POP.1564.TO.ZS | 0.7077
18 | Domestic general government health expenditure (% of current health expenditure) | SH.XPD.GHED.CH.ZS | 0.7007
19 | Population ages 35-39, male (% of male population) | SP.POP.3539.MA.5Y | 0.6914
20 | Population growth (annual %)¶ | SP.POP.GROW | 0.689
21 | International migrant stock (% of population)∗ | SM.POP.TOTL.ZS | 0.6842
22 | Population ages 05-09, female (% of female population) | SP.POP.0509.FE.5Y | 0.6734
23 | Population ages 10-14, female (% of female population)† | SP.POP.1014.FE.5Y | 0.6686
24 | Population ages 0-14, female (% of female population)† | SP.POP.0014.FE.ZS | 0.6615
25 | Population, male (% of total population)† | SP.POP.TOTL.MA.ZS | 0.6595
26 | Population ages 15-19, female (% of female population)† | SP.POP.1519.FE.5Y | 0.6388
∗ Removed for having 81.82% of values missing from 2000 to 2021.
† Removed for high correlation with other important feature(s) ranked higher by feature importance.
§ Removed for poor predictions from the univariate Prophet model; not used in multivariate model training.
¶ Removed because some years had negative values, so log-transform scaling could not be applied; excluded from the forecasting.

Logarithmic Scaling on Time-series Data: Prior to forecasting, we performed a logarithmic transformation for data scaling and reverted to the original values for performance measurement. Although the Min-Max scaling algorithm was used initially, we chose logarithmic scaling for the time series forecast.
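The naive-forecast imputation and the log/exp round trip described above reduce to a forward fill plus a log transform in pandas; the yearly values below are hypothetical:

```python
import numpy as np
import pandas as pd

# One country's yearly values for one indicator, with gaps (hypothetical numbers).
series = pd.Series([74.1, np.nan, 74.6, np.nan, np.nan, 75.2],
                   index=range(2000, 2006))

filled = series.ffill()      # naive forecast: carry the previous year's value forward
log_vals = np.log(filled)    # log-scale before model fitting
restored = np.exp(log_vals)  # invert after forecasting to report original units
```

Note that the forward fill is exactly why indicators with negative values (e.g. Population growth) had to be dropped: `np.log` is undefined for them.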
This decision was based on the lower error rate found with logarithmic scaling when returning to the original data [20].

3.6.2 Prophet Forecasting Model to Predict Indicator Values. Our approach to predicting yearly indicator values for the clustered countries and important features involved multivariate modeling in Prophet. This is what enables "what-if" analysis for forecasting health indicators: if we simulate or forecast individual predictor indicators to guide policy, we can see the effects of those simulations on our final multivariate model. This is crucial for understanding how these indicators' forecasts varied per country and whether the Prophet model's results were consistent for all clustered countries.

Univariate Prophet Model. The univariate Prophet model forecasts a single time series: it takes into account the historical values of the target variable and identifies patterns and trends to make future predictions. The model captures trend (g(t)), seasonality (s(t)), holiday effects (h(t), if any), and error (ε(t)) using additive regression components:

y(t) = g(t) + s(t) + h(t) + ε(t)    (2)

In our work, we used a univariate Prophet model to forecast the predictor values for the future. However, if existing econometric models of other types are better suited for a particular indicator, those can also be used. The univariate model for each predictor built the future dataframe for the years 2022 to 2031 (10 years).

Multivariate Prophet Model. The multivariate Prophet model extends the univariate model by incorporating additional exogenous variables or features as regressors that can influence the target variable. These additional exogenous variables (f_1(t), f_2(t), ..., f_n(t)) can be other time series data or external factors such as economic indicators. In this work, we incorporated the other indicators in the health statistics data as regressors to predict specific indicators one by one.
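Fitting such a model amounts to handing Prophet a `ds`/`y` dataframe plus one column per regressor and registering each with `add_regressor`. A minimal sketch of the frame-building step (the column names are hypothetical, and the Prophet calls are shown in comments so the snippet runs without the `prophet` package installed):

```python
import pandas as pd

def prophet_frame(df, year_col, target, regressors):
    """Build the ds/y dataframe Prophet expects, carrying the other
    indicators along as extra regressor columns."""
    out = pd.DataFrame({
        "ds": pd.to_datetime(df[year_col].astype(str), format="%Y"),
        "y": df[target],
    })
    for r in regressors:
        out[r] = df[r]
    return out

# With the real library (not executed here):
#   from prophet import Prophet
#   m = Prophet()
#   for r in regressors:
#       m.add_regressor(r)   # one extra regressor per remaining indicator
#   m.fit(prophet_frame(data, "year", "immunization_measles", regressors))
```

Forecasting then requires future values for every regressor, which is why the paper first fits a univariate model per predictor to build the 2022-2031 future dataframe.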
By including these variables, the model can capture their impact on the target variable and improve the accuracy of predictions.

y(t) = g(t) + s(t) + h(t) + f1(t) + f2(t) + ... + fn(t) + ε(t)    (3)

By incorporating relevant external factors, the multivariate model can capture additional information and dependencies that impact the target variable. This can lead to more accurate and reliable predictions. Including additional variables provides insights into the factors driving the target variable's behavior. It enables a better understanding of the system's relationships and dependencies among different variables. This also allows for customization based on the specific requirements of the forecasting problem. Incorporating multivariate forecasting, however, also introduced additional complexity, such as more involved data preprocessing, feature selection, and potential correlation considerations.

Hierarchical Clustering and Multivariate Forecasting for Health Econometrics. epiDAMIK @ SIGKDD Workshop, 2023.

The code to replicate this study can be found at: https://github.com/iupui-soic/WB-cluster-forecast

4 RESULTS

4.1 Clustering

With a distance threshold set at 5, our cluster dendrogram (Fig.
1) presented nine (9) visually distinct clusters.

The Silhouette score, a measure used to evaluate the clusters and the countries within the nine clusters, is displayed in Table 2.

Table 2: CLUSTERED COUNTRIES AND EVALUATION METRIC

Cluster 1 (European Countries), Silhouette 0.2914: Bulgaria, Belarus, Czechia, Estonia, Croatia, Hungary, Lithuania, Latvia, Slovenia, Ukraine
Cluster 2 (European, North American, Oceanian Countries and Japan), Silhouette 0.4851: Australia, Austria, Belgium, Canada, Switzerland, Germany, Denmark, Spain, Finland, France, United Kingdom, Greece, Ireland, Iceland, Italy, Japan, Luxembourg, Netherlands, Norway, New Zealand, Portugal, Sweden, United States
Cluster 3 (East & West African, South Asian and Other Countries), Silhouette 0.4227: Benin, Bangladesh, Congo, Comoros, Eritrea, Ghana, Gambia, Haiti, Cambodia, Madagascar, Mauritania, Nepal, Pakistan, Senegal, Togo, Yemen
Cluster 4 (Southern African Countries), Silhouette 0.2484: Botswana, Lesotho, Namibia, Eswatini
Cluster 5 (African Countries), Silhouette 0.3309: Burundi, Burkina Faso, Cameroon, Ethiopia, Kenya, Liberia, Mali, Mozambique, Malawi, Niger, Nigeria, Rwanda, Sierra Leone, Chad, Tanzania, Uganda, Zambia
Cluster 6 (Ensemble of Countries from Different Regions), Silhouette 0.6693: Albania, Argentina, Armenia, Bahamas, Bosnia and Herzegovina, Brazil, Barbados, Chile, Colombia, Costa Rica, Cuba, Cyprus, Georgia, Israel, Jamaica, Kazakhstan, Sri Lanka, Moldova, Malta, Mauritius, Panama, Singapore, Seychelles, Thailand, Uruguay
Cluster 7 (Large Economy Countries in Asia), Silhouette 0.5667: China, India
Cluster 8 (Middle Eastern Countries), Silhouette 0.6597: Bahrain, Kuwait, Oman, Qatar, Saudi Arabia
Cluster 9 (Ensemble of Countries from Different Regions), Silhouette 0.3282: Azerbaijan, Belize, Bolivia, Algeria, Ecuador, Egypt, Fiji, Guatemala, Guyana, Indonesia, Iran, Jordan, Kyrgyz Republic, Kiribati, Lebanon, Morocco, Maldives, Mexico, Myanmar, Mongolia, Malaysia, Peru, Philippines, Paraguay, Solomon Islands, El Salvador, Turkmenistan, Tonga, Tunisia, Uzbekistan, Vietnam, Vanuatu

Figure 2: Time-series Yearly Data and Future Forecasts for Qatar using Univariate Prophet Model
Figure 3: Time-series Yearly Data and Future Forecasts for Qatar using Multivariate Prophet Model

4.2 Feature Relevance

We analyzed correlations between the features. If an indicator demonstrated a strong positive or negative correlation with any other indicator in the dataset, we excluded it. We retained only those indicators that didn't correlate highly with others. This process yielded 15 indicators out of the original 26 in Cluster-8, shown in Table 1.

4.3 Time-Series Forecasting

Our secondary objective was to apply a multivariate time series forecasting Prophet model to the significant indicators of the five countries within a cluster [35]. A preliminary statistical test highlighted similarities in the indicators' values for the year 2000.

4.3.1 Outcome of Feature Reduction. Due to many missing values, we excluded two features identified through feature importance. We also removed nine indicators that exhibited a high correlation with other significant features, and one indicator that displayed negative values, which was unsuitable for logarithmic transformation. Consequently, we proceeded with univariate forecasting for the remaining 14 indicators.

epiDAMIK @ SIGKDD Workshop, 2023. Paddo, Afreen and Purkayastha.

Table 3: ACCURACY METRICS FOR THE FORECASTED INDICATOR VALUES AMONG THE COUNTRIES (Avg±SD; Prophet / LSTM)

Indicator | RMSE (Prophet / LSTM) | MAPE (Prophet / LSTM) | R2 (Prophet / LSTM) | Adjusted R2 (Prophet / LSTM)
Population ages 30-34, male | 0.0001±0.0001 / 0.5941±0.3203 | 0±0 / 0.0401±0.0259 | 1±0 / 0.5997±0.3132 | 1±0 / 0.5497±0.3523
Population ages 30-34, female | 0.0001±0.0001 / 0.2563±0.0961 | 0±0 / 0.0216±0.0109 | 1±0 / 0.6592±0.468 | 1±0 / 0.6166±0.5265
Population ages 35-39, male | 0.0002±0.0002 / 0.3445±0.1631 | 0±0 / 0.0259±0.0093 | 1±0 / 0.6581±0.2856 | 1±0 / 0.6154±0.3213
Population ages 25-29, male | 0.0059±0.0127 / 1.1566±0.7031 | 0.0006±0.0013 / 0.071±0.0374 | 1±0 / 0.6031±0.252 | 1±0.0001 / 0.5535±0.2835
Population ages 20-24, female | 0.0287±0.0637 / 0.4546±0.3414 | 0.0032±0.0072 / 0.0421±0.0381 | 0.9979±0.0046 / 0.5067±0.3441 | 0.9956±0.0097 / 0.445±0.3871
Population ages 15-64, male | 0.001±0.0012 / 1.2822±0.8086 | 0±0 / 0.0143±0.0092 | 1±0 / 0.4109±0.6021 | 1±0 / 0.3372±0.6774
Population ages 05-09, female | 0.0001±0.0001 / 0.5177±0.1904 | 0±0 / 0.0458±0.0212 | 1±0 / 0.0855±1.1177 | 1±0 / -0.0288±1.2574
Survival to age 65, male | 0.001±0.0007 / 1.1749±0.7324 | 0±0 / 0.0125±0.0089 | 1±0 / 0.5497±0.6035 | 1±0 / 0.4935±0.6789
Domestic general government health expenditure | 0.4999±0.498 / 2.3871±1.0832 | 0.0058±0.0059 / 0.0282±0.015 | 0.9681±0.0409 / 0.4775±0.2928 | 0.933±0.0859 / 0.4122±0.3294
Immunization, measles | 0.0009±0.0008 / 1.0328±0.6968 | 0±0 / 0.0086±0.0054 | 1±0 / 0.2123±0.2276 | 1±0 / 0.1139±0.256
People using at least basic drinking water services | 0.0008±0.0006 / 0.2138±0.3582 | 0±0 / 0.002±0.0035 | 0.7997±0.4471 / 0.5849±0.3723 | 0.5794±0.9388 / 0.533±0.4189

4.3.2 Statistical Testing on the Existing Indicator Values. We performed the Kruskal-Wallis test on the values of the 15 indicators for the countries within the clusters. The resulting p-values were all greater than 0.05, suggesting no statistically significant differences among the values of the indicators within the clustered countries. Since these indicators demonstrated similar values across countries, we continued with time series forecasting.

4.3.3 Univariate & Multivariate Prophet.

Future Dataframe. Univariate Prophet modeling produced reliable predictions for most indicators, yielding low RMSE and MAPE and better R2 values. However, three indicators demonstrated inferior R2 values compared to the others, leading us to exclude them from the multivariate models. These indicators were: Population ages 20-24, male (% of male population); Immunization, DPT (% of children ages 12-23 months); and People using at least basic sanitation services (% of the population).

Future Forecasts. The multivariate Prophet model generated forecasts for each of the 11 indicators under consideration.
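The correlation screen described in Section 4.2, dropping any indicator highly correlated with an already-retained one, can be sketched as follows; the feature names, toy series, and the 0.9 cutoff are assumptions for illustration:

```python
def pearson(x, y):
    # Plain Pearson correlation coefficient for two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.9):
    """Keep a feature only if it is not highly correlated with an
    already-kept feature (higher-ranked features are seen first)."""
    kept = []
    for name, series in features.items():
        if all(abs(pearson(series, features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept

feats = {
    "pop_30_34_male":   [10, 11, 12, 13, 14],
    "pop_30_34_female": [10.1, 11.0, 12.2, 12.9, 14.1],  # near-duplicate series
    "health_exp":       [5, 3, 6, 2, 7],
}
print(drop_correlated(feats))  # ['pop_30_34_male', 'health_exp']
```

Because insertion order encodes feature-importance rank, the higher-ranked member of each correlated pair survives, matching the †-footnote rule in Table 1.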
In each forecast, the multivariate model included 10 additional regressors corresponding to the other 10 indicators, serving as predictors for the excluded target indicator. The accuracy metrics for the multivariate models are detailed in Table 3. The univariate forecasting model predicted 15 indicators for a sample country (Qatar), and the multivariate model predicted 11 indicators (see Fig. 2 and Fig. 3, respectively). These figures illustrate the multivariate Prophet model's superior forecasting performance. The combined forecasts for the clustered countries (Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia) from the year 2000 to 2031 for all 11 indicators are illustrated in Fig. 4 with continuous error bar plots. The differences in the indicators in the future years can also be seen in Fig. 4.

4.3.4 Statistical Analysis on the Forecasting. The future forecasted indicator values also showed statistically significant differences (p<0.05) among the countries, highlighting that the forecasted trajectories of the countries might change in the future based on the already changing nature of the predictors. Using univariate forecasting, such modeling would not have been possible.

5 DISCUSSION

Health econometrics analyses have traditionally relied on cross-country surveys like the National Family Health Survey (NFHS) and the Demographic Health Survey (DHS). They often employ logistic regression and other statistical techniques for comparing countries [20, 33]. Among unsupervised statistical approaches, I-distance [11, 12] has been utilized for ranking purposes, including ranking countries based on health indicators. However, our study presents the potential of enhanced clustering machine learning techniques for managing multiple related variables, particularly for large datasets [21].

Notably, certain clusters, such as Cluster-4 and Cluster-8, display geographical and cultural similarities.
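The accuracy metrics reported in Table 3 can be computed as follows; this is a minimal sketch with made-up actual/predicted values, not the paper's data:

```python
def rmse(actual, predicted):
    # Root mean squared error.
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

def mape(actual, predicted):
    # Mean absolute percentage error (as a fraction; undefined if actual has zeros).
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def r2(actual, predicted):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

def adjusted_r2(actual, predicted, n_regressors):
    # Penalizes R2 for the number of regressors (10 in the multivariate model).
    n = len(actual)
    return 1 - (1 - r2(actual, predicted)) * (n - 1) / (n - n_regressors - 1)

actual = [2.0, 3.0, 4.0, 5.0, 6.0]
predicted = [2.1, 2.9, 4.2, 4.8, 6.1]
print(round(rmse(actual, predicted), 4), round(r2(actual, predicted), 4))
```

Reporting these per country and taking the mean and standard deviation across the five countries yields the Avg±SD cells of Table 3.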
The cluster linkage cutoff would need to be significantly lowered to establish more readily apparent similarities within each cluster. However, this could lead to fewer predictor indicators, affecting our features of importance. If we expand the indicators used in feature selection, we risk complicating the model and reducing its interpretability [16].

Other clustering algorithms, especially spectral clustering, while powerful in certain cases, may not always be the most appropriate choice. Spectral clustering operates on graph theory principles and requires constructing a similarity matrix and computing eigenvectors, which can be computationally expensive and memory-intensive for larger datasets. Spectral clustering also contains a stochastic factor, which was avoided by using hierarchical clustering.

Given the size and nature of our dataset, hierarchical clustering with Ward's method proved to be a more scalable and efficient option. It aligns well with our goals of exploring hierarchical patterns and capturing diverse cluster shapes in the health and nutrition dataset. Hierarchical clustering also provided meaningful insights into the health indicators across countries. In addition, logarithmic scaling of the dataset yielded a lower mean squared error overall in predicting the future feature values compared to Min-Max scaling.

While our models present robust and meaningful findings, they also highlight some challenges that need to be considered in future studies. A critical point is the trade-off between the granularity of clustering and the complexity of multivariate models.
While deeper clustering might yield more nuanced insights, it can also reduce the number of predictor indicators and increase model complexity. This calls for a balanced approach to ensure the interpretability and practical utility of the models.

Additionally, our multivariate forecasting model is predicated on current and past trends. The dynamic nature of health indicators and their susceptibility to external factors such as political changes, economic fluctuations, or global health crises might alter these trends significantly. Future research must consider these potential disruptions and explore methods to account for such unpredictability.

Further, we could identify certain associations from the statistical differences among features obtained from the multivariate model's analysis and predictions. For example, Fig. 4i shows Qatar's predicted future health expenditure declining, and Fig. 4j indicates a corresponding decline in immunization. Similar declines are seen in female population age groups of potential maternal age (Fig. 4c and 4e). We drew validating conclusions that our multivariate Prophet model captures the reliance of one feature on another for a country [22]. This can aid health assessment research associated with various indicators, such as the work by Amoatey et al. [3].

Recognizing these trends and connections could guide policymakers or health practitioners toward effective strategies for improving overall health outcomes. Moreover, our predictions consider various population age groups, offering a comprehensive perspective on health prospects [9]. Our study's application of multivariate forecasting allowed us to predict future health outcomes based on current trends and patterns.
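A minimal sketch of agglomerative clustering with Ward's merge criterion, the method discussed above; toy 2-D points and an illustrative cost threshold stand in for the paper's indicator vectors and its dendrogram cutoff of 5:

```python
def ward_cluster(points, max_merge_cost):
    """Bottom-up agglomeration: repeatedly merge the pair of clusters whose
    merge least increases total within-cluster variance (Ward's criterion),
    stopping when the cheapest merge exceeds a cost threshold.
    A simplified stand-in for scipy's linkage; threshold is illustrative."""
    clusters = [[p] for p in points]

    def centroid(c):
        dims = len(c[0])
        return [sum(p[d] for p in c) / len(c) for d in range(dims)]

    def merge_cost(a, b):
        # Ward increase: (|A||B| / (|A|+|B|)) * squared centroid distance.
        ca, cb = centroid(a), centroid(b)
        sq = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return len(a) * len(b) / (len(a) + len(b)) * sq

    while len(clusters) > 1:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: merge_cost(clusters[ij[0]], clusters[ij[1]]),
        )
        if merge_cost(clusters[i], clusters[j]) > max_merge_cost:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two well-separated groups of 2-D "country profiles" (toy data).
pts = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (5.0, 5.0), (5.2, 5.1), (4.9, 5.3)]
groups = ward_cluster(pts, max_merge_cost=1.0)
print(len(groups))  # the two separated groups survive the threshold
```

Lowering the threshold (the linkage cutoff discussed above) splits these groups further, which is exactly the granularity/complexity trade-off at issue.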
This forecasting model has allowed us to project possible trajectories for various health indicators in the Middle Eastern countries cluster, aiding in long-term strategic health planning for the region. The associations identified between different features underline the interconnectedness of health outcomes, signaling the necessity for an integrated approach to healthcare policy.

5.1 Limitations

This study has its limitations. Although we selected 26 indicators from the World Bank dataset's total of 128, not all could be incorporated into our multivariate prediction model. For example, the Population Growth indicator was excluded because it contained negative values incompatible with logarithmic transformation. However, our model's predictions could be significantly influenced by the inclusion of this indicator.

Figure 4: Forecasts of each indicator for five clustered countries. Panels: (a) Population ages 35-39, male (% of male population); (b) Population ages 30-34, male (% of male population); (c) Population ages 30-34, female (% of female population); (d) Population ages 25-29, male (% of male population); (e) Population ages 20-24, female (% of female population); (f) Population ages 15-64, male (% of male population); (g) Population ages 05-09, female (% of female population); (h) Survival to age 65, male (% of cohort); (i) Domestic general government health expenditure (% of current health expenditure); (j) Immunization, measles (% of children ages 12-23 months); (k) People using at least basic drinking water services (% of population). Blue forecast lines are for Bahrain; orange for Kuwait; green for Oman; red for Qatar; and purple for Saudi Arabia.
Similarly, other omitted indicators could have offered additional insights into overall health outcomes.

5.2 Future Work

Future work could involve constructing a more informative model with an expanded set of features or a larger cluster of countries. Techniques like Neural Prophet [37], DeepAR [30], or even simpler models like a Random Forest Regressor [10] could be explored. Alternative approaches to constructing future data frames, such as Auto ARIMA, could yield more reliable results.

6 CONCLUSION

In conclusion, our study has identified key factors influencing health outcomes in selected Gulf Cooperation Council (GCC) countries (Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia). We highlighted the importance of population wellness and age-specific strategies in healthcare management and disease prevention. Our method involved data preprocessing, clustering using Ward's method, feature selection, and time series forecasting with multivariate Prophet. This research provides a comprehensive approach to health data analysis, identifying crucial health outcome influencers and delivering actionable insights for policymakers and healthcare professionals using machine learning and forecasting techniques.
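The "what-if" idea from Section 3.6.2, where simulating a predictor's future path shifts the multivariate forecast, can be illustrated with a toy additive model; the trend, coefficient, and regressor paths are all assumed for illustration, not estimated from the paper's data:

```python
# Toy what-if on an additive model: y(t) = g(t) + beta * f1(t),
# i.e. a single-regressor instance of Eq. (3).
def forecast(years, f1_path, beta=0.8, trend=lambda t: 50 + 0.2 * (t - 2021)):
    return [trend(t) + beta * f for t, f in zip(years, f1_path)]

years = list(range(2022, 2027))
baseline = [10, 10, 10, 10, 10]   # e.g., health expenditure held flat
scenario = [10, 11, 12, 13, 14]   # a policy that raises it each year

# Effect of the simulated policy on the target indicator, year by year.
delta = [s - b for s, b in zip(forecast(years, scenario), forecast(years, baseline))]
print([round(d, 2) for d in delta])
```

Because the model is additive, the intervention's effect is just beta times the change in the regressor path, which is what makes such scenario comparisons interpretable for policy guidance.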
LhpGGbQT8JL
Reject
1: Ok but not good enough - rejection
In this paper, the authors used hierarchical clustering to group 131 countries into several clusters and then performed time-series forecasting for the cluster consisting of several Middle Eastern countries. While forecasting socio-economic and health indicators is important for policymaking, the methods used in this study are relatively simple. I have a few concerns about the study and results.
1. Both the clustering and forecasting methods are off-the-shelf approaches. The methodological novelty of this study is not clear. For instance, time-series forecasting is widely used in other studies.
2. There was no comparison of the forecasting method with other approaches. There should be more accurate forecasting methods, and the authors did not establish the advantage of the current method in this study.
3. Lack of details. Many technical details were not provided in the manuscript. For instance, what features were included in the dataset? What additional variables were used in the multivariate forecasts, and how were those variables selected? How did the authors select the prediction target variables in Table 3? Was it because the forecasting method worked better for those targets?
4. In Eq. (3), the prediction of y(t) needs the input of exogenous variables in the future, which is not available when the forecast is generated. How is this issue solved? How is it decided which exogenous variables to include?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
BNU_N-7EIR
KDD.org/2023/Workshop/epiDAMIK
2023
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective
["Namrata Banerji", "Steve Chang", "Andrew Perrault", "Tanya Berger-Wolf", "Mikkel Quam"]
The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University's data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection, investigate the roles and limitations of data analysis in supporting intervention choice and implementation strategies amid the complexities of the pandemic as it unfolded. Balancing privacy, consent, and transparency and ensuring the responsible handling of sensitive information is crucial in maintaining public trust. We examine privacy-preserving techniques, ethical frameworks, and legal regulations aimed at safeguarding individuals' rights while harnessing the power of data. In our experience, conscientious data architecture provided a foundation for meaningful ethical applications of data products, which not only helped mitigate the current crisis, but also can provide valuable insights for better addressing future public health emergencies.
["datasets", "public health", "data management", "ethics"]
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective

Namrata Banerji ([email protected]), The Ohio State University, Columbus, Ohio, USA
Steve Chang ([email protected]), Ohio Supercomputer Center, Columbus, Ohio, USA
Andrew Perrault ([email protected]), The Ohio State University, Columbus, Ohio, USA
Tanya Y. Berger-Wolf ([email protected]), The Ohio State University, Columbus, Ohio, USA
Mikkel Quam ([email protected]), The Ohio State University, Columbus, Ohio, USA

Figure 1. Archived OSU Safe & Healthy COVID-19 Dashboard for November 2, 2020

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA
©2023 Copyright held by the owner/author(s).

Abstract

The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University's data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection and investigate the roles and limitations of data
In our experience, conscien-tious data architecture provided a foundation for meaningfulethical applications of data products, which not only helpedmitigate the current crisis, but also can provide valuable in-sights for better addressing future public health emergencies.CCS Concepts: •Information systems →Database ad-ministration ;•Applied computing →Health care infor-mation systems .Keywords: datasets, public health, data management, ethicsepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel QuamACM Reference Format:Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel Quam. 2023. Pandemic Data Collection, Man-agement, Analysis and Decision Support: A Large Urban Univer-sity Retrospective. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDDInternational Workshop on Epidemiology meets Data Mining andKnowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM,New York, NY, USA, 8 pages.1 IntroductionThe onset of the COVID-19 pandemic in early 2020 was oneof the most significant and life changing events for everyoneon the planet, impacting everything from small businessesto entire countries. In case of educational institutions, the in-definite suspension of classes, upending of every traditionalaspect of academic and student life, and the transition tovirtual education was stressful for students, staff, and facultyalike. The Ohio State University (OSU), a large urban edu-cational institution, undertook a massive policy response tosupport the continuing function of the university by moni-toring and managing the dynamics of the pandemic on andaround its campuses. Putting together a coalition of epidemi-ologists, data scientists, public health policy makers wasonly the first step of what shaped up to be at least a threeyear marathon. Data was at the center of the whole process,both as the decision enabler and as the product of many ofthe contributing efforts. 
Making data actionable required the work of many teams and several iterations of cleaning, analysis and inference, and visualization. In this paper, we present the overall data-focused aspects of the process, highlighting the achievements and the hindrances, as well as the major takeaways, so that we are better prepared for future public health emergencies or other large-scale collective responses. This manuscript, besides serving as a piece of institutional memory, communicates in detail the various obstacles encountered in the handling of the mammoth data, for the data science community to be aware of. Among the main takeaways we consider the effectiveness of data-driven approaches for managing the pandemic response, the need for an institutional data infrastructure, and the importance of a well-organized team of experts and professionals working together towards a well-defined goal.

2 Overview

The Ohio State University stood up the Comprehensive Monitoring Team (CMT) [4] to include a framework of support for data-driven decisions for pandemic management, including robust case finding (via serial mass administration of individual PCR tests with rapid in-house processing), locally administered isolation of cases, contact tracing and quarantine of close contacts, as well as data integration, analysis, modelling, risk evaluation, policy recommendations, and intervention implementation based upon knowledge derived from individual case management, subsequent viral (genomic) sequencing, large-scale syndromic surveillance, and evidence of environmental (wastewater and dust) shedding [6, 12, 14, 15]. Here we present the core of the data component of this system, which integrated data from various testing centers, conducted daily analyses, and represented data in formats usable by the leadership to support both individual-level contact tracing and the university's policy response to the public health emergency.
In the coming sections, we discuss the goal of setting up such a system, the implementation pipeline, the data sources, and some of the challenges and takeaways.

3 Goals

Building and maintaining such a huge framework, and employing a whole workforce including faculty, students, and healthcare workers, consumes university resources at a large scale. The goals were the result of several rapid iterations of convergent conversations between the university administration and members of the CMT, as well as consultations with external experts. The specific aims of the data components of the framework were as follows:

• Tracking the positivity rate. Positivity rate, or testing positivity rate, defined as the percentage of tests reported that are positive [10], emerged early in the pandemic as the agreed-upon indicator of the state of the population and the basis for comparing different populations [9]. We used the positivity rate throughout the monitoring process for a number of reasons, one of them being that this percentage (sometimes a fraction) was the most expressive and conveyed a more complete story than other measures such as the absolute number of positive cases. It is true that 100% of the university population was not being tested, because there were exemptions (medical and otherwise) and non-compliants, but we had the data necessary to determine exactly what fraction of the population was being tested. This was the best metric that we could monitor from the data and information available to us at the time, and it never became a cause for concern.

• Contact tracing. Removal of positive and potentially positive cases from the population is key to suppressing the spread of the virus [8, 17]. It was necessary to provide contact information for people who tested positive and to identify and contact their close contacts in order to isolate and quarantine them, respectively.

• Understanding the micro trends and risks based on events.
To understand the dynamics, the risks, and the implications of the pandemic for various subpopulations, it was necessary to provide the ability to zoom in on specific time intervals and subgroups in the data. Examples of the questions asked include: How does fall break or Halloween behaviour change/impact infection rates? Is there an increased risk for students in a 4-person suite over a 2-person dorm room? How do the risks associated with in-person classes compare with hybrid or remote classes?

• Supporting daily policy decisions of a large urban university. Daily decisions supported by data included the choice of a testing strategy and protocol, the transition to hybrid vs. online-only classes, occupancy in classrooms, vaccination and masking requirements, etc. Having access to the right data was essential. The testing protocol [3, 16] was stricter in the early days of the pandemic, requiring all students who lived in residence halls or who had at least one in-person class to test at least once every week. The requirements were relaxed in the subsequent semesters. Testing mandates were also in place around holidays; for example, students were required to test before and after a Thanksgiving break. The WiFi data was often utilized to get a sense of how many students were still residing in the dorms over the break, and how many went home.

• Reducing burden in the wider population. The OSU Columbus campus is a large urban campus with a highly permeable boundary in the center of a city. In order to contain the pandemic, the infection rates needed to be controlled both on and around campus. Moreover, the university sought to mitigate the export of infections to communities beyond its campuses.
College students mix with the city population and visit their families over academic breaks, potentially increasing the risk of transmission to vulnerable community members. Recommending, and at times requiring, testing before the academic breaks was one such measure taken to reduce the burden on the vulnerable, immuno-compromised population outside the university.

4 Implementation

OSU has 68,000 students, 12,000 of whom reside in residence halls during a regular year. During the pandemic, about 8,000 students were in residence halls and were required to test weekly. Additional students, faculty, and staff were testing voluntarily. At its peak, more than 30,000 tests per week were processed.

Multiple teams across Information Technology support, Student Life, the Translational Data Analytics Institute (TDAI), the Infectious Disease Institute (IDI), University Medical Centers, the College of Public Health, and many more were responsible for standing up a system that would be in place for at least the next 3 years. The data environment was a secure and flexible environment that allowed for dynamic data definition and integration of data from at least 56 sources when it was introduced. (The number of data sources grew to over 100 by the end of 2022.) Initial data sources included testing data together with the administrative data of student information, residence and permanent addresses, demographics, class registration, residence layout, class and college affiliations, WiFi access point information, and much more. The pipeline is illustrated in Figure 2 and is described very briefly below.

• Primary test data was transmitted into the internal secure data environment via electronic file transfer multiple times a day.

• Additional attributions from other internal OSU systems (Identity Management (IDM), Student Information Systems (SIS), Student Life, etc.) were preloaded and updated according to the system's change protocol (e.g.
each semester).

• Test results and internal data were combined into a cohesive reusable dataset (AKA the "gold table").

• Analysts and dashboard builders utilized a common source for all reports and visualizations.

• Data was also sent to Helpspot/Salesforce to support case investigation and contact tracing efforts.

4.1 Data description and daily analysis

Among the 50+ tables and views that were maintained on AWS, there were 10-12 datasets, described below, that were most frequently accessed for daily analysis reports.

• 'Gold' dataset of people: This view is derived from multiple tables that contain individuals' unique identifiers; demographic information such as gender, race, ethnicity, and age; home and campus addresses; affiliation with the university; affiliation with an OSU campus; indicators of whether they are on or off campus; student housing residence; etc. There are roughly 2.5 million entries in this dataset, with updates at regular time intervals of changing affiliations, addresses, and other variables.

• 'Gold' dataset of tests: Similar to the gold person table, this is also a derived view of data on tests administered by the university that combines variables like the test provider name, test administered time, test result time, test result, type of test conducted, etc. It also contained some of the demographic information and addresses so that quick results could be obtained by running simple queries, without joining multiple tables.

• Dataset on off-campus residence housing: This dataset contains information on what organizations individuals are a member of, whether they are an active member, whether they live in the organization housing, etc. This was a particularly useful dataset at the beginning of the pandemic, as many outbreaks occurred in off-campus residence houses, which were analyzed for patterns [13].

• Dataset on contact tracing: Each actionable positive test result generated a ticket, which is entered into a SalesForce(TM) dataset of tickets.
The metadata associated with each ticket included a unique ticket identifier, the person whose close contact this is, the person who is the close contact, both their information, the time and result of the test, whether that person had symptoms, whether that person is an OSU affiliate, etc. This dataset was important throughout the pandemic, since these tests and contacts were the focus of most of the analyses.

Figure 2. Data flow in the OSU COVID-19 monitoring pipeline.

Also, this dataset contained data on positive tests even if they were not present in the gold test data table. This is because while the gold table only recorded tests that were administered by the university, the SalesForce(TM) tickets datasets contained information on other tests, some from outside the university, as long as they were positive. This dataset was thus a good source for the absolute number of positives in the university community, but not very good for computing rates, due to the absence of a denominator.

• Datasets on class enrollment: When the university reopened for the Fall after the summer of 2020, a lot of classes were online, some were hybrid, and few were in person. It was important to know if there was additional risk of infection for students enrolled in classes conducted in person, and decisions had to be made to combat the risk and spread of infections. The class enrollment datasets were key in this effort.

• Datasets on vaccination: Two datasets were maintained that contained vaccination information, one for students and one for employees (including staff). Although containing the same information in essence, the two were structured differently. The tables for students contained two date variables, one denoting the date a dose was received, and the other indicating the date when the individual becomes fully vaccinated according to CDC guidelines.
It also had variables corresponding to whether the individual had a vaccination exemption, whether the dose was CDC approved, the CDC code (e.g., 208 for Pfizer) [2], whether the shot was a booster, etc. On the other hand, the employee vaccination table contained columns on first vaccination date, second vaccination date, up to seventh vaccination date, and the provider information for each, in addition to the exemption and booster indications. Thus, the data analysis needed to produce the same results from the two tables needed to be different.
The initial daily analysis included breakdowns of test positivity rate in each of the residence halls and across demographics, majors, and campuses. This was for internal consumption, pattern identification, and insight derivation. Much of this data and the derived analysis was private and was not made public. The results that did make it to the dashboard [3], as shown in Figure 1, were the aggregate and summary numbers: the reproduction number, which is a standard epidemiological metric [7], the daily number of cases, the 7-day average, etc.¹ Identification of close contacts of students residing in dorms was a large part of the daily analysis, and the gold datasets were utilized to that end to produce a list of roommates and suitemates. A concise description of the analysis performed was first published in an initial report [4] in October 2020 and updated in a second report [5] in March 2021 by the CMT.

5 Challenges
The novelty, scale, and duration of the recent and ongoing pandemic were major challenges. Data collection, management, and analysis pipelines at this scale had no modern precedent and had to be designed as they were beginning to be used. Moreover, the timelines were drastically compressed and the requirements initially changed frequently. In addition, some areas, such as close contacts or attendance of events, lacked data collection, and some critical data streams, including off-campus testing, were initially completely absent.
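The single-table convenience of the derived "gold" tests view described in Section 4.1 can be illustrated with a toy sketch. The table name, column names, and values below are hypothetical stand-ins, not OSU's actual schema; the point is only that demographics are pre-joined, so a positivity-rate breakdown reduces to one grouped query.

```python
import sqlite3

# Toy stand-in for the derived "gold" tests view: one row per test,
# with demographics already joined in, so no further joins are needed.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE gold_tests (
        person_id TEXT,   -- university identifier (name.#)
        campus    TEXT,
        result    TEXT,   -- 'positive' / 'negative'
        test_date TEXT    -- ISO date
    )
""")
conn.executemany(
    "INSERT INTO gold_tests VALUES (?, ?, ?, ?)",
    [
        ("a.1", "Columbus", "negative", "2020-11-01"),
        ("b.2", "Columbus", "positive", "2020-11-01"),
        ("c.3", "Columbus", "negative", "2020-11-02"),
        ("d.4", "Newark",   "negative", "2020-11-02"),
    ],
)

# Positivity rate per campus: positives / all tests, one simple query.
# In SQLite, (result = 'positive') evaluates to 0/1, so SUM counts positives.
rows = conn.execute("""
    SELECT campus,
           1.0 * SUM(result = 'positive') / COUNT(*) AS positivity
    FROM gold_tests
    GROUP BY campus
    ORDER BY campus
""").fetchall()
print(rows)  # [('Columbus', 0.3333333333333333), ('Newark', 0.0)]
```

The same shape of query, run against an Athena-backed view instead of SQLite, is what made daily per-residence-hall and per-demographic breakdowns cheap once the gold view existed.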
Further, like most teams around the world, we initially lacked the full understanding of how to translate the questions into data and how to prioritize the variables and the analysis for decision support, particularly in the context of human behavior. Below are some of the issues that posed significant challenges to the team.

¹The dashboard was awarded the A+ rating and selected as the best COVID-19 university dashboard by the "We Rate Covid Dashboards" panel of academics [1].

Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective. epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA.

5.1 Data cleaning
The data was collected from numerous sources, some of which were manual entries and consequently had unavoidable human error. For example, a table of people in the database had the OSU unique identification (name.#) as the primary key, and the table of test results was supposed to have the same as a foreign key. Typographical errors or null values in this identifier column resulted in our inability to match a test to an individual, causing a non-negligible shift in the summary statistics. Once the problem had been identified, there was a joint effort to clean it up, combining more than four data streams and reducing the number of unidentified tests to a number that would not change the inference. Yet, there were still a few individually unidentifiable entries in the datasets, albeit not a number high enough to raise concern. Minimizing manual entry to data sources can reduce such issues by a considerable amount.
A similar problem was found in the table for employee vaccination records, with clearly wrong dates of doses. While most were due to errors, in some cases employees were actually part of vaccination trials and had received a dose before any vaccination received emergency use authorization or approval for distribution to the general public.
These cases were indistinguishable from the erroneous cases without careful manual investigation and knowledge of the regulatory frameworks and timing of numerous vaccine candidates from all over the world.
One of the challenges that the team immediately encountered while using demographic data was that there were a number of similar datasets, curated by different organizations at OSU and used for different operational purposes. Repurposing these for COVID-19 demographics analysis required that specific datasets and methodologies were employed for consistency. A critical part of the human infrastructure here were experts in the use of these legacy datasets, able to share what nuances may have been encoded in the data and to help determine the least wrong datasets and methods to use. This investigation eventually led to the creation of the "gold" datasets, which were so named because they were the COVID project's gold standard for the demographics associated with an individual or test.
These examples illustrate the need for expert data curation, close scrutiny of analysis outputs that consumed these data sources, efforts to minimize manual data entry, and close collaboration with domain experts at every step.

5.2 Data storage, backup, documentation, and recovery
The volume of data generated by testing mandates as well as voluntary testing required careful consideration of large, yet quickly accessible and continuously backed up, data storage. The ability to look up prior data was critical to understanding trends and the dynamics of trends, as well as comparing the outcomes of various past decisions. For continuously changing data, such as the daily updated test data, it is necessary to maintain regular snapshots, checkpoints, and versions. This aspect was not fully appreciated initially and required significant effort to redesign the data architecture. We maintained two 'gold' datasets, one corresponding to people and demographics and one corresponding to tests' metadata.
These derived datasets were cleaned and organized to our standards to serve as the basis of further analysis. This cut down on the work of individual analysts, since those cleaning/organization steps would not need to be repeated. The 'gold' data of people, consisting of faculty, staff, students, and everyone else affiliated in some way with the university, updates significantly every semester, overwriting previous data in the database (S3 environment). We would save a snapshot of the data every semester, but unfortunately the snapshots were initially taken towards the end of the semesters, when students had already started leaving the campus. As a result, when we recently wanted to reconstruct a time series of positivity rates in residence halls, it differed from the original because we do not have the correct denominator. Recovering this information is possible, but requires integration of other data sources, demanding significant investment of resources, effort, and time. The majority of the people who were part of the university supporting the CMT and were responsible for setting up the system are no longer working at OSU. Moreover, early in the reopening of the university, the primary focus was on managing the pandemic and bringing down the positivity rate, and detailed documentation was not prioritized.
A mid-semester migration from one homegrown case data management solution to an outside vendor was a major issue that required major investment and retraining, and we are continuing to deal with its consequences today from a data and analysis perspective.
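The denominator problem above comes down to when snapshots are taken of a table that is overwritten in place. A minimal, self-contained sketch (the table contents and helper names are hypothetical, not the actual OSU schema) shows how an early-semester deep-copied snapshot preserves the resident count that the live table later loses:

```python
import copy
from datetime import date

# Hypothetical in-memory stand-in for the "gold" people table,
# which is overwritten in place as affiliations change.
people = {
    "a.1": {"residence": "Dorm A"},
    "b.2": {"residence": "Dorm A"},
    "c.3": {"residence": "Dorm A"},
}

snapshots = {}

def take_snapshot(table, as_of):
    # Deep-copy so later in-place updates cannot alter history.
    snapshots[as_of] = copy.deepcopy(table)

take_snapshot(people, date(2020, 8, 25))   # start of semester: 3 residents

# Mid-semester churn: one student leaves campus; the live table forgets them.
del people["c.3"]
take_snapshot(people, date(2020, 12, 10))  # end of semester: 2 residents

def dorm_a_denominator(as_of):
    # Denominator for a residence-hall positivity rate, as of a snapshot date.
    return sum(1 for p in snapshots[as_of].values()
               if p["residence"] == "Dorm A")

# With only the late snapshot, one positive case looks like a 1-in-2 rate;
# the early snapshot recovers the correct 1-in-3 denominator.
print(dorm_a_denominator(date(2020, 8, 25)))   # 3
print(dorm_a_denominator(date(2020, 12, 10)))  # 2
```

Taking the snapshot at the start of the semester, before churn begins, is what makes the historical rate reconstructible without integrating other data sources.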
Roughly from August 2020 to November 2020, we had our positive test (case) data ingested and case investigation/contact tracing notes stored in a secured instance of a HelpSpot database, integrating in some instances with REDCap surveys and pushing out to several communication platforms. Later we shifted to a Salesforce Health Cloud build, which assisted with future testing data variations and vaccine information, as well as some automatic reminder communications. The data had in theory been migrated from the old table to the new one, but in part user-generated heterogeneity, as well as version control issues in the HelpSpot source data, meant there continued to be gaps in the data ingested by Health Cloud (Salesforce) which do not have simple workarounds for analysis of all variables. We maintain several tables for test information storage, but there are inconsistencies across those tables. More than one table exists mainly because we derived simpler versions of tables with many columns that are not relevant for day-to-day analysis. One of the (intermediate) mother tables recently had one of its very important columns (the test specimen collection time/date column) dropped from an integration during an update, and it would have been acceptable to just look it up in a derived or other related testing table had there not been major differences in the number of entries in the others.
The IT organization at OSU, then known as the Office of the CIO (OCIO), had embarked on a project prior to the COVID epidemic to move OSU enterprise data off premises and onto Amazon Web Services (AWS). AWS was the obvious choice as the data storage platform, as much of the data was already present on the platform, and tools such as Amazon Athena were able to provide a layer of data abstraction so that disparate datasets could be queried in a consistent manner.
That OCIO project to house these data in a consistent manner was fortunate; it would otherwise have added an additional layer of processing to export and synthesize data from various legacy systems. The other major consideration is that there are significant costs to using a commercial cloud service. While these were covered in part by the OCIO project, additional data storage for COVID data and the use of AWS tools such as Athena were incurred by the COVID project.

5.3 Data governance and ethical considerations
The university has a complex set of data governance regulations, as do individuals' private health information, whether used in healthcare or public health applications. While special authorization was granted to use some of the data in the pandemic emergency, security and privacy remained strict requirements. Each team member had training in handling secure and private data.
In addition to the standard data governance issues, dealing with high-resolution personal data has its own set of ethical issues. Ultimately, the main question was: what is the benefit of using a particular data source or performing a particular analysis, and would it change the decisions or the pandemic dynamics? If so, was it necessary to use individual and identifiable data for decision making, or could aggregate or coded information have similar utility? For example, while it is within the rights of the university to use the WiFi access point information to "follow" an individual or to understand who is within the same room, such information has a high 'icky factor' and should be used sparingly. Moreover, while initially it seemed that WiFi data would provide a good proxy for contact tracing, it turned out that the resolution of the data did not correspond well to the physical definition of a contact. Ultimately, it was decided to use WiFi data in aggregate to assess population movements rather than individuals' proximity to other individuals.
For example, WiFi data was used to estimate the number of students leaving campus over the weekend or the number of students present in an "in person" classroom. Moreover, the aggregate trends proved to be much more robust than the individual-based analysis and were significantly less time consuming. Additionally, adherence to the current applicable statutory guidelines for case investigation, subsequent case management, and/or contact tracing may require some variation depending upon individuals' occupation, travel history, personal risk factors, immunocompetence, and vaccination status, which could include certain specific preexisting conditions, medications, clinical care received, viral (variant/sub-variant) lineage, and/or disease severity. However, specific individuals' health information related to their experience with COVID-19 would largely not meaningfully determine macro-level prevention policy or interventions in the university context independently from aggregate trends and information in the wider public health policy guidance, which are separately informed by individuals' public health, laboratory testing, and clinical health records. Therefore, particularly sensitive individual-level data, especially health data, were collected and subsequently shared only to the extent they would have 'meaningful use' within the data user groups' spheres of control, stated goals, and purview (i.e.
healthcare providers would have access to information relevant for managing patient care; public health authorities would have access to information relevant to determining specific application of disease management protocols for individuals and/or groups; occupational health, workplace, and student life safety personnel would have limited access to information relevant to adherence with applicable disease prevention laws and policies aimed at risk reduction, such as adherence to testing, vaccination, and isolation/quarantine requirements in some instances).

6 Takeaways
6.1 Behavior over analytics
The main takeaway of our data-supported pandemic monitoring framework is the same as the main takeaway for dealing with the COVID-19 pandemic worldwide: ultimately, the success of the system hinges on modifiable human behavior rather than the sophistication of the analysis. No improvement in the accuracy of the analysis of the effect of masking in a given setting (i.e., library, classroom, laboratory, or healthcare setting) is meaningful if people will not (continue to) comply with an indoor mask mandate. Similar limitations became apparent with both pharmaceutical and non-pharmaceutical interventions: even as evidence increasingly substantiated benefits and new sub-variants emerged, populations' apparent risk tolerance grew and spread.

6.2 Communication is key
When working with a team this large, with people from vastly diverse backgrounds, communication between the teams becomes an essential component. A major part of the analysis was carried out by graduate student employees, who were sometimes not aware of things like floor structure in dorms, testing protocols, vaccination mandates, etc., which were important analysis components.
Similarly, the modelling team was involved in building risk models, models for testing strategy development, etc., which relied on domain knowledge outside of mathematics or computer science. Clearly, experts in every relevant domain (epidemiology, public health, student residence life, university logistics and operations, etc.) need to be constant partners in the analysis.

6.3 Equity considerations and singling out demographic groups
When patterns appear to be emerging within a specific group or sub-demographic, there may be an equity-oriented opportunity for targeting or strengthening an intervention, but there may also be a bias in the observed signal. One group may in fact be more often in situations involving exposure to infectious persons, or engaged in riskier behavior than others, as we occasionally discovered from data analysis. However, the available policy-level changes may not have been feasible solutions and were not always ultimately enacted. What we started to see in the data raised questions about the ethics and trustworthiness of data-enabled interventions without context or corroboration. Some solutions aimed at addressing one group's perceived or real deficiency in access to resources, or excessive exposure, could foster stigma or loss of other resources in unanticipated ways. After careful consideration, it was agreed that singling out a group was often not enough of a value addition, or could do more harm than good.
In some cases, trends observed initially in one population or group were indicative of larger trends that could be addressed by policy shifts relevant to the whole community, which would both address the observed inequity and mitigate known unintended consequences.

6.4 Micropatterns significant, but not usable in hindsight
Reflection on the decisions made over the course of three years showed that the micropatterns and microtrends observed in the data had little to no effect on those decisions. Observations that a certain subgroup engaged in activities that increased the risk of the spread of the infection did not prompt the authorities to take measures to shut down those activities, in many cases because it was either not cost effective or unethical to do so. These data nuances did provide information, but it was not actionable. In retrospect, however, the information's main utility was in the fact that no single critical subgroup was the key to the solution. The scale of the phenomena did not lend itself to a single pathway of solution or a single target group. Patterns that we learned in settings like an early long-term care facility were also observed later in dorms, sorority and fraternity houses, and athletics teams, and they led to better population-level responses. A good example would be the limitations of certain kinds of tests for transmission suppression. The Big10 testing program involved daily testing of athletes during their competition season, given that team members were often unable to mask and physically distance in some sports. Unfortunately, when transmission started to increase rapidly in late autumn 2020 as sports teams restarted their compressed seasons, even daily testing with rapid results was insufficient to suppress transmission, largely because the particular test used did not detect all infectious individuals immediately.
By the time one tests positive on an antigen test, like those in use at that time, a person may have already been infected and infectious for a few days, potentially exposing others and continuing transmission chains. Antigen tests are useful for rapid diagnosis, particularly when symptomatic, but are not always ideally suited for detection early enough to reduce spread in a serial testing model. OSU opted for developing and deploying swift, minimally invasive (saliva-based), highly specific, highly sensitive PCR testing, shown to be able to detect pre-symptomatic and asymptomatic infections (eventually even processing results with its own PCR testing and sequencing lab capable of thousands of tests per day). Although not as fast as antigen tests, the average turnaround time was less than 24 hours during much of the semesters' most populated periods. This was a scenario where tracking a micropattern in a particular well-observed and well-resourced group gave us really good information about what and how we should be optimizing testing resources and working within their limitations for the larger university community's population.

6.5 Data infrastructure
The overall data infrastructure consists of cyberinfrastructure (compute, storage, networking, cloud and web services), information infrastructure (data and metadata management, search, archiving, cataloging, and digital services), and analytics infrastructure (data integration, harmonization, and analysis). The large volume of data collected, the collection rate, the distributed team setting, potential errors, inconsistencies and variations in reporting standards, and changing objectives all strained and challenged the existing data infrastructure at OSU and necessitated its expansion. Moreover, COVID-19 management provided a great case study emphasizing the fact that data infrastructure integrates cyber-, information, and data services infrastructures through human infrastructure.
Building the human infrastructure is both the most critical aspect of any data infrastructure and the hardest to implement. We have seen personnel migrate out of the team, and the university, and when that happens, they take institutional knowledge with them. Replacing personnel in such a fast-paced environment entails rigorous training that newer team members have to go through within a very short period of time. Even once they are on board, it takes significant time to bring them up to speed, which often creates a bottleneck.

6.6 Scale
The sheer volume of COVID-19 data generated from testing and vaccination overwhelmed the existing data management systems of the university as well as the state. Scaling up data infrastructure and analytical capabilities to handle large-scale data collection and analysis proved to be a significant challenge, but one that can definitely be overcome.

7 Comparison between similar systems in place nationwide
The COVID-19 pandemic was monitored worldwide, and any attempt to track rates or contain the outbreaks had to involve systems governing huge amounts of data. Among the enormous number of research papers utilizing pandemic data, very few discuss the nuances of the data collection and storage mechanisms deployed. For example, a paper [18] from the University of Michigan describes collecting environmental surveillance data in order to estimate infection risk. This direction of research and analysis was popular in many organizations and was a good means of estimating the risk of infection within a campus from sources like dust and sewage water, including at OSU [6, 14, 15]. Another paper [11] discusses digital health research and tracking in general, but in the light of the pandemic and how it impacted practices.
Their concerns are very similar to ours, but unlike their generic view, we provide the complete story of a real experience, with a series of issues faced and tackled at an urban institution.

8 Conclusion
We hope that the COVID-19 pandemic was a one-off unique event, never to be repeated. Yet, we should be prepared to respond to a similar event by learning from our experience. We hope that the OSU CMT work presented here can serve not only as a blueprint, but as a guide for considerations, priorities, and potential pitfalls, should a response at this scale ever be needed again.

Acknowledgments
We would like to acknowledge the work of the many people who have contributed to the effort of enabling the data-driven approach to monitoring and managing the COVID-19 pandemic at the Ohio State University: the entire Comprehensive Monitoring Team (CMT), Case Investigation and Contact Tracing Team, CMT student analysts, CMT/IDI Modeling Team, Applied Microbiology Services Lab, Testing Operations Team, Student Life Isolation and Quarantine Team, Student Health Services, Employee Health Services, local and state public health authorities, dashboard developers, and the OTDI team, including D&A data engineers, the data governance team, network administrators, and enterprise security.

References
[1] A deeper dive into Ohio State's top-rated COVID-19 testing data dashboard. https://news.osu.edu/a-deeper-dive-into-ohio-states-top-rated-covid-19-testing-data-dashboard. Accessed July 31, 2023.
[2] IIS: HL7 Standard Code Set Mapping CVX to Vaccine Groups. https://www2.cdc.gov/vaccines/iis/iisstandards/vaccines.asp.
[3] Safe and Healthy Buckeyes COVID-19 Dashboard (archived). https://safeandhealthy.osu.edu/dashboard. Accessed July 31, 2023.
[4] Safe Campus Scientific Advisory Subgroup Recommendations. https://safeandhealthy.osu.edu/sites/default/files/2020/07/safe-campus_6.30.pdf. Accessed July 31, 2023.
[5] The Ohio State University Comprehensive Monitoring Team — Report 2. March 2, 2021.
https://safeandhealthy.osu.edu/sites/default/files/2021/03/the_ohio_state_university_comprehensive_monitoring_team_-_report_2.pdf. Accessed July 31, 2023.
[6] Tracking COVID-19 with dust at the Ohio State University. https://sapac.illumina.com/company/news-center/feature-articles/tracking-covid-19-with-dust-at-the-ohio-state-university.html. Accessed July 31, 2023.
[7] Achaiah, N. C., Subbarajasetty, S. B., and Shetty, R. M. R0 and Re of COVID-19: Can we predict when the pandemic outbreak will be contained? Indian Journal of Critical Care Medicine 24, 11 (Nov. 2020), 1125–1127.
[8] Centers for Disease Control and Prevention. COVID-19 Overview and Infection Prevention and Control Priorities in non-U.S. Healthcare Settings. https://www.cdc.gov/coronavirus/2019-ncov/hcp/non-us-settings/overview/index.html.
[9] Dallal, A. A., Dallal, U. A., and Dallal, J. A. Positivity rate: an indicator for the spread of COVID-19. Current Medical Research and Opinion 37, 12 (2021), 2067–2076.
[10] Doraiswamy, S., Mamtani, R., and Cheema, S. An in-depth analysis of 10 epidemiological terminologies used in the context of COVID-19. SAGE Choice 50, 6 (Dec. 2021), 819–826.
[11] Dron, L., Kalatharan, V., Gupta, A., Haggstrom, J., Zariffa, N., Morris, A. D., Arora, P., and Park, J. Data capture and sharing in the COVID-19 pandemic: a cause for concern. The Lancet Digital Health 4, 10 (Oct. 2022), E748–E756.
[12] Dusen, J. V., LeBlanc, H., Renninger, N., Nastas, N., Panescu, J., Smith, J. W., Sovic, M. G., Williams, A., Quam, M., Faith, S., and Dannemiller, K. Identification of SARS-CoV-2 variants in indoor dust. In Association of Environmental Engineering and Science Professors Research and Education Conference 2023 (2022).
[13] Krantz, M., Bleichrodt, A., and Quam, M. Housing diversity and SARS-CoV-2 transmission in a university setting.
In Quantitative Methodology Center 2022 Conference: Why Quantitative Research Matters (2022).
[14] Renninger, N., Nastasi, N., Bope, A., Cochran, S. J., Haines, S. R., Balasubrahmaniam, N., Stuart, K., Bivins, A., Bibby, K., Hull, N. M., and Dannemiller, K. C. Indoor dust as a matrix for surveillance of COVID-19. ASM Journals 6, 2 (Apr. 2021).
[15] Wascher, M., Klaus, C., Alvarado, C., Bope, A., Panescu, J., Quam, M., Dannemiller, K., and Joseph, T. A mechanistic modeling and estimation framework for environmental pathogen surveillance. In Society of Mathematical Biology Meeting, Mini-Symposium (2022).
[16] Wascher, M., Schnell, P. M., Khudabukhsh, W. R., Quam, M., Tien, J. H., and Rempała, G. A. Monitoring SARS-CoV-2 transmission and prevalence in population under repeated testing. medRxiv (2021).
[17] World Health Organization. Clinical management of COVID-19. https://www.who.int/teams/health-care-readiness/covid-19.
[18] Zhang, X., Wu, J., Smith, L. M., Li, X., Yancey, O., Franzblau, A., Dvonch, J. T., Xi, C., and Neitzel, R. L. Monitoring SARS-CoV-2 in air and on surfaces and estimating infection risk in buildings and buses on a university campus. Journal of Exposure Science and Environmental Epidemiology 32 (2022), 751–758.
bFsZeXosPQF
Review of Data Collection, Management, Analysis and Decision Support During COVID-19: A Retrospective from The Ohio State University
3: Marginally above acceptance threshold
Summary: This paper discusses the large undertaking of collecting, processing, and reporting COVID-19 data from the Ohio State University. This paper makes note of the challenges and missteps faced in data processing, and the lessons learned from this experience during the pandemic.
Clarity: This paper was clear and easy to follow. To improve upon the clarity, I would suggest the following:
--Ensure that the "aims" listed are discussed in later sections in the paper. The first aim of "tracking the positivity rate" is not mentioned at any other point in the paper. It is not clear from this paragraph which positivity rate is being tracked (university affiliates?), and whether any weighting scheme was applied to the data. Similarly, the second aim is "contact tracing", however there is no further discussion of how contact tracing was done and/or recorded. It is unclear whether this is really an aim of the data component, or whether this is considered too far downstream. If both of these were aspects of the data framework, they should be expanded on in the implementation section.
--In the Figure 2 schematic, it would be helpful to highlight the data processing/management steps or programs used to convert from the gold test results to the dashboard and contact tracing app. Additional details could be added to this figure.
--The term "human infrastructure" is bolded in section 7.4, yet this term is not defined. It would be beneficial to define what this term means in the context of this paper, as this term may not be familiar to many readers.
Minor comments on clarity:
--In the "implementation" section and in Figure 2, a number of abbreviations are used that are never spelled out. Writing out these abbreviations would clarify the paper and the data schematic (e.g., IDM, SIS, STFP, TDAI).
--The discussion surrounding issues with Salesforce data is unclear.
The authors mention "user generated heterogeneity" and "version control issues", however the link between those issues and what is causing gaps in data is not fully apparent.
--Figure 1 is not mentioned at all in the paper. It would be useful to include a discussion of who is using/viewing the dashboard and how frequently it was used. That would give an indication of how the data was being used by the community/decision makers at OSU.
Originality: The work is original in that it is the only paper to describe the data-driven processes occurring at the Ohio State University. However, many of the points made are not unique, and seem to highlight issues with this data management system. Lessons such as the need to "minimize manual data entry", work with experts in "every relevant domain", and following ethical guidelines regarding data privacy are not ideas original to this project. To highlight the originality of this work, it would be helpful to have a small review of literature section that discusses how this project improves or differs from similar undertakings at large universities.
Significance: The significance of this paper could be improved by including more actionable messages to future data systems and teams. Significance could also be improved by noting how this data was used for decision making. One of the goals listed in section 3 is to "support daily policy decisions". However, throughout the paper there is little indication of how the data that has been acquired, processed and presented informs decision making. Including additional examples of how this data was used would be very beneficial. The significance would also be boosted by discussing how individual COVID-19 testing data was integrated (if at all) with wastewater data and/or genomic data to inform university policy.
Pros:
--Well written paper
--Concisely and clearly presents aims of a data-driven framework
--Clearly explains many of the pitfalls that can occur in data management, and acknowledges that during the pandemic, some best practices (such as recording all steps along the way) were not followed due to the need to provide numbers to decision makers.
--Provides nice examples of when microtrends were useful.
Cons:
--Paper does not provide many actionable steps for using data, or a data-driven approach. Instead, rather broad generalizations are made as to what would be useful (e.g., less manual data entry).
--The implementation section is not informative enough. It would be beneficial to provide more information about the programs used for sorting data, and for moving from one health system to another.
--The figures presented seem disconnected from the text of the paper. Figure 1 should be discussed in the paper, and Figure 2 should be expanded to be more descriptive.
3: The reviewer is fairly confident that the evaluation is correct
BNU_N-7EIR
KDD.org/2023/Workshop/epiDAMIK
2023
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective
["Namrata Banerji", "Steve Chang", "Andrew Perrault", "Tanya Berger-Wolf", "Mikkel Quam"]
The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University's data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection, investigate the roles and limitations of data analysis in supporting intervention choice and implementation strategies amid the complexities of the pandemic as it unfolded. Balancing privacy, consent, and transparency and ensuring the responsible handling of sensitive information is crucial in maintaining public trust. We examine privacy-preserving techniques, ethical frameworks, and legal regulations aimed at safeguarding individuals' rights while harnessing the power of data. In our experience, conscientious data architecture provided a foundation for meaningful ethical applications of data products, which not only helped mitigate the current crisis, but also can provide valuable insights for better addressing future public health emergencies.
["datasets", "public health", "data management", "ethics"]
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective

Namrata Banerji ([email protected]), The Ohio State University, Columbus, Ohio, USA
Steve Chang ([email protected]), Ohio Supercomputer Center, Columbus, Ohio, USA
Andrew Perrault ([email protected]), The Ohio State University, Columbus, Ohio, USA
Tanya Y. Berger-Wolf ([email protected]), The Ohio State University, Columbus, Ohio, USA
Mikkel Quam ([email protected]), The Ohio State University, Columbus, Ohio, USA

Figure 1. Archived OSU Safe & Healthy COVID-19 Dashboard for November 2, 2020

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. © 2023 Copyright held by the owner/author(s).

Abstract

The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University's data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection and investigate the roles and limitations of data analysis in supporting intervention choice and implementation strategies amid the complexities of the pandemic as it unfolded. Balancing privacy, consent, and transparency and ensuring the responsible handling of sensitive information is crucial in maintaining public trust. We examine privacy-preserving techniques, ethical frameworks, and legal regulations aimed at safeguarding individuals' rights while harnessing the power of data.
In our experience, conscientious data architecture provided a foundation for meaningful ethical applications of data products, which not only helped mitigate the current crisis, but also can provide valuable insights for better addressing future public health emergencies.

CCS Concepts: • Information systems → Database administration; • Applied computing → Health care information systems.

Keywords: datasets, public health, data management, ethics

ACM Reference Format:
Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel Quam. 2023. Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 8 pages.

1 Introduction

The onset of the COVID-19 pandemic in early 2020 was one of the most significant and life-changing events for everyone on the planet, impacting everything from small businesses to entire countries. In the case of educational institutions, the indefinite suspension of classes, the upending of every traditional aspect of academic and student life, and the transition to virtual education were stressful for students, staff, and faculty alike. The Ohio State University (OSU), a large urban educational institution, undertook a massive policy response to support the continuing function of the university by monitoring and managing the dynamics of the pandemic on and around its campuses. Putting together a coalition of epidemiologists, data scientists, and public health policy makers was only the first step of what shaped up to be at least a three-year marathon. Data was at the center of the whole process, both as the decision enabler and as the product of many of the contributing efforts.
Making data actionable required the work of many teams and several iterations of cleaning, analysis and inference, and visualization. In this paper, we present the overall data-focused aspects of the process, highlighting the achievements and the hindrances, as well as the major takeaways, so that we are better prepared for future public health emergencies or other large scale collective responses. This manuscript, besides serving as a piece of institutional memory, communicates in detail the various obstacles encountered in the handling of the mammoth data for the data science community to be aware of. Among the main takeaways we consider the effectiveness of the data-driven approaches for managing the pandemic response, the need for an institutional data infrastructure, and the importance of a well organized team of experts and professionals working together towards a well-defined goal.

2 Overview

The Ohio State University stood up the Comprehensive Monitoring Team (CMT) [4] to include a framework of support for data-driven decisions for pandemic management, including robust case finding (via serial mass administration of individual PCR tests with rapid in-house processing), locally administered isolation of cases, contact tracing and quarantine of close contacts, as well as data integration, analysis, modelling, risk evaluation, policy recommendations, and intervention implementation based upon knowledge derived from individual case management, subsequent viral (genomic) sequencing, large scale syndromic surveillance, and evidence of environmental (wastewater and dust) shedding [6, 12, 14, 15]. Here we present the core of the data component of this system, which integrated data from various testing centers, conducted daily analyses, and represented data in formats usable by the leadership to support both individual level contact tracing and the university's policy response to the public health emergency.
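As a toy illustration of the kind of daily analysis such a pipeline supports, the sketch below computes a test positivity rate with a 7-day rolling average, two of the summary numbers the paper describes. The record layout and function name are illustrative assumptions, not OSU's actual schema:

```python
# Hypothetical sketch: per-day test positivity and its 7-day rolling
# average, from (result_date, result) pairs. Field names are assumptions.
from collections import defaultdict


def daily_positivity(tests):
    """Return {date: (positivity_pct, seven_day_avg)} in date order.

    `tests` is an iterable of (result_date, result) pairs, where a
    positive outcome is the string 'positive'.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for date, result in tests:
        totals[date] += 1
        positives[date] += result == "positive"

    out = {}
    rates = []
    for date in sorted(totals):
        rate = 100.0 * positives[date] / totals[date]
        rates.append(rate)
        window = rates[-7:]  # trailing window, shorter at the start
        out[date] = (rate, sum(window) / len(window))
    return out
```

In production this calculation ran over the integrated testing tables rather than in-memory pairs, but the metric itself is this simple ratio.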
In the coming sections, we discuss the goal of setting up such a system, the implementation pipeline, data sources, and some of the challenges and takeaways.

3 Goals

Building and maintaining such a huge framework and employing a whole workforce including faculty, students, and healthcare workers consumes university resources at a large scale. The goals were the result of several rapid iterations of convergent conversations between the university administration and members of the CMT, as well as consultations with external experts. The specific aims of the data components of the framework were as follows:

• Tracking the positivity rate. Positivity rate, or testing positivity rate, defined as the percentage of tests reported that are positive [10], emerged early in the pandemic as the agreed upon indicator of the state of the population and the basis for comparing different populations [9]. We used the positivity rate throughout the monitoring process for a number of reasons, one of them being that this percentage (sometimes a fraction) was the most expressive and conveyed a more complete story than other measures such as the absolute number of positive cases. It is true that 100% of the university population was not being tested, because there were exemptions (medical and otherwise) and non-compliants, but we had the data necessary to determine exactly what fraction of the population was being tested. This was the best metric that we could monitor from the data and information available to us at the time, and it never became a cause for concern.

• Contact tracing. Removal of positive and potentially positive cases from the population is key for suppressing the spread of the virus [8, 17]. It was necessary to provide contact information for people who tested positive and to identify and contact their close contacts in order to isolate and quarantine them, respectively.

• Understanding the micro trends and risks based on events.
To understand the dynamics, the risks, and the implications of the pandemic for various subpopulations, it was necessary to provide the ability to zoom in on specific time intervals and subgroups in the data. Examples of the questions asked include: How does fall break or Halloween behaviour change/impact infection rates? Is there an increased risk for students in a 4-person suite over a 2-person dorm room? How do the risks associated with in-person classes compare with hybrid or remote classes?

• Supporting daily policy decisions of a large urban university. Daily decisions supported by data included the choice of a testing strategy and protocol, transition to hybrid vs online only classes, occupancy in classrooms, vaccination and masking requirements, etc. Having access to the right data was essential. The testing protocol [3, 16] was more strict in the early days of the pandemic, requiring all students who lived in residence halls or who had at least one in-person class to test at least once every week. The requirements were relaxed in the subsequent semesters. Testing mandates were also in place around holidays; for example, students were required to test before and after a Thanksgiving break. The WiFi data was often utilized to get a sense of how many students were still residing in the dorms over the break, and how many went home.

• Reducing burden in the wider population. The OSU Columbus campus is a large urban campus with a highly permeable boundary in the center of a city. In order to contain the pandemic, infection rates needed to be controlled both on and around campus. Moreover, the university sought to mitigate the export of infections to communities beyond its campuses.
College students mix with the city population and visit their family over academic breaks, potentially increasing the risk of transmission to vulnerable community members. Recommending and at times requiring testing before the academic breaks was one such measure taken to reduce the burden on the vulnerable, immuno-compromised population outside the university.

4 Implementation

OSU has 68,000 students, 12,000 of whom reside in residence halls during a regular year. During the pandemic, about 8,000 students were in residence halls and were required to test weekly. Additional students, faculty, and staff were testing voluntarily. At its peak, more than 30,000 tests per week were processed.

Multiple teams across Information Technology support, Student Life, the Translational Data Analytics Institute (TDAI), the Infectious Disease Institute (IDI), University Medical Centers, the College of Public Health, and many more were responsible for standing up a system that would be in place for at least the next 3 years. The data environment was a secure and flexible environment that allowed for dynamic data definition and integration of data from at least 56 sources when it was introduced. (The number of data sources grew to over 100 by the end of 2022.) Initial data sources included testing data together with the administrative data of student information, residence and permanent addresses, demographics, class registration, residence layout, class and college affiliations, WiFi access point information, and much more. The pipeline is illustrated in Figure 2 and is described very briefly below.

• Primary test data was transmitted into the internal secure data environment via electronic file transfer multiple times a day.
• Additional attributions from other internal OSU systems (Identity Management (IDM), Student Information Systems (SIS), Student Life, etc.) were preloaded and updated according to the system's change protocol (e.g.
each semester).
• Test results and internal data were combined into a cohesive reusable dataset (AKA the “gold table”).
• Analysts and dashboard builders utilized a common source for all reports and visualizations.
• Data was also sent to Helpspot/Salesforce to support case investigation and contact tracing efforts.

4.1 Data description and daily analysis

Among the 50+ tables and views that were maintained on AWS, there were 10-12 datasets, described below, that were most frequently accessed for daily analysis reports.

• ‘Gold’ dataset of people: This view is derived from multiple tables that contain individuals' unique identifiers, demographic information such as gender, race, ethnicity, age, home and campus address, affiliation with the university, affiliation with an OSU campus, indicators of whether they are on or off campus, student housing residence, etc. There are roughly 2.5 million entries in this dataset, with updates at regular time intervals of changing affiliations, addresses, and other variables.

• ‘Gold’ dataset of tests: Similar to the gold person table, this is also a derived view of data on tests administered by the university that combines variables like test provider name, test administered time, test result time, test result, type of test conducted, etc. It also contained some of the demographic information and addresses so that quick results could be obtained by running simple queries, without joining multiple tables.

• Dataset on off campus residence housing: This dataset contains information on what organizations individuals are a member of, whether they are an active member, whether they live in the organization housing, etc. This was a particularly useful dataset at the beginning of the pandemic, as many outbreaks occurred in off-campus residence houses, which were analyzed for patterns [13].

• Dataset on contact tracing: Each actionable positive test result generated a ticket, which is entered into a SalesForce(TM) dataset of tickets.
The metadata associated with each ticket included a unique ticket identifier, the person whose close contact this is, the person who is the close contact, both their information, the time and result of the test, whether that person had symptoms, whether that person is an OSU affiliate, etc. This dataset was important throughout the pandemic, since these tests and contacts were the focus of most of the analyses. Also, this dataset contained data on positive tests even if they were not present in the gold test data table. This is because while the gold table only recorded tests that were administered by the university, the SalesForce(TM) tickets dataset contained information on other tests, some outside the university, as long as they were positive. This dataset was thus a good source for the absolute number of positives in the university community, but not very good for computing rates, due to the absence of a denominator.

Figure 2. Data flow in the OSU COVID-19 monitoring pipeline.

• Datasets on class enrollment: When the university reopened for the Fall after the summer of 2020, a lot of classes were online, some were hybrid, and few were in-person. It was important to know if there was additional risk of infection for students enrolled in classes conducted in person, and decisions had to be made to combat the risk and spread of infections. The class enrollment datasets were key in this effort.

• Datasets on vaccination: Two datasets were maintained that contained vaccination information, one for students and one for employees (including staff). Although containing the same information in essence, the two were structured differently. The tables for students contained two date variables, one denoting the date of dose received, and the other indicating the date when the individual becomes fully vaccinated according to CDC guidelines.
It also had variables corresponding to whether the individual had a vaccination exemption, whether the dose was CDC approved, the CDC code (e.g., 208 for Pfizer) [2], whether the shot was a booster, etc. On the other hand, the employee vaccination table contained columns for first vaccination date, second vaccination date, up to seventh vaccination date, and the provider information for each, in addition to the exemption and booster indications. Thus, the data analysis needed to produce the same results from the two tables needed to be different.

The initial daily analysis included a breakdown of test positivity rate in each of the residence halls, between demographics, majors, and campuses. This was for internal consumption, pattern identification, and insight derivation. Much of this data and the derived analysis was private and was not made public. The results that did make it to the dashboard [3], as shown in Figure 1, were the aggregate and summary numbers on the reproduction number, which is a standard epidemiological metric [7], the daily number of cases, the 7-day average, etc.¹ Identification of close contacts of students residing in dorms was a large part of the daily analysis, and the gold datasets were utilized to that end to produce a list of roommates and suitemates. A concise description of the analysis performed was first published in an initial report [4] in October 2020 and updated in a second report [5] in March 2021 by the CMT.

5 Challenges

The novelty, scale, and duration of the recent and ongoing pandemic were major challenges. Data collection, management, and analysis pipelines at this scale had no modern precedent and had to be designed as they were beginning to be used. Moreover, the timelines were drastically compressed and the requirements initially were changing frequently. In addition, some areas, such as close contacts or attendance of events, lacked data collection, and some critical data streams, including off-campus testing, were initially completely absent.
Further, like most teams around the world, we initially lacked the full understanding of how to translate the questions into data and how to prioritize the variables and the analysis for decision support, particularly in the context of human behavior. Below are some of the issues that posed significant challenges to the team.

¹ The dashboard was awarded an A+ rating and selected as the best COVID-19 university dashboard by the “We Rate Covid Dashboards” panel of academics [1].

5.1 Data cleaning

The data was collected from numerous sources, some of which were manual entries, and consequently had unavoidable human error. For example, a table of people in the database had the OSU unique identification (name.#) as the primary key, and the table of test results was supposed to have the same as a foreign key. Typographical errors or null values in this identifier column resulted in our inability to correspond a test to an individual, causing a non-negligible shift in the summary statistics. Once the problem had been identified, there was a joint effort to clean it up, combining more than four data streams and reducing the number of unidentified tests to a number that would not change the inference. Yet, there were still a few individually unidentifiable entries in the datasets, albeit not a high enough number to raise concern. Minimizing manual entry to data sources can reduce such issues by a considerable amount.

A similar problem was found in the table for employee vaccination records, with clearly wrong dates of doses. While most were due to errors, in some cases employees were actually part of vaccination trials and had received a dose before any vaccination received emergency use authorization or approval for distribution to the general public.
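The join problem described above (tests that fail to match a person because of typos or nulls in the identifier key) can be sketched as follows. The field names, the normalization steps, and the surfacing of unmatched rows are illustrative assumptions, not the actual pipeline:

```python
# Hypothetical sketch of linking test records to a person table on the
# OSU identifier (name.#). Field names (`name_n`) are assumptions.
def link_tests_to_people(tests, people):
    """Return (matched, unmatched) test records.

    `tests` is a list of dicts with a `name_n` key; `people` maps a
    normalized identifier to person attributes. Unmatched rows (typos,
    nulls) are surfaced for cleanup rather than silently dropped, so the
    shift they cause in summary statistics stays visible.
    """
    matched, unmatched = [], []
    for test in tests:
        # Light normalization: tolerate case and stray whitespace;
        # a None key normalizes to "" and falls through to unmatched.
        key = (test.get("name_n") or "").strip().lower()
        person = people.get(key)
        if person:
            matched.append({**test, **person})
        else:
            unmatched.append(test)
    return matched, unmatched
```

Counting `unmatched` over time gives exactly the signal the team used: whether the residual unidentified tests are few enough not to change the inference.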
These cases were indistinguishable from the erroneous cases without careful manual investigation and knowledge of the regulatory frameworks and timing of numerous vaccine candidates from all over the world.

One of the challenges that the team immediately encountered while using demographic data was that there were a number of similar datasets, curated by different organizations at OSU and used for different operational purposes. Repurposing these for COVID-19 demographics analysis required that specific datasets and methodologies be employed for consistency. Part of the human infrastructure that was critical here were experts in the use of these legacy datasets, able to share what nuances may have been encoded in the data and to help determine the least wrong datasets and methods to use. This investigation eventually led to the creation of the "gold" datasets, which were so named because they were the COVID project's gold standard for the demographics associated with an individual or test.

These examples illustrate the need for expert data curation, close scrutiny of analysis outputs that consumed these data sources, efforts to minimize manual data entry, and close collaboration with domain experts at every step.

5.2 Data storage, backup, documentation, and recovery

The volume of data generated by testing mandates as well as voluntary testing required careful consideration of large, yet quickly accessible and continuously backed up data storage. The ability to look up prior data was critical to understanding trends and the dynamics of trends, as well as comparing the outcomes of various past decisions. For continuously changing data, such as the daily updated test data, it is necessary to maintain regular snapshots, checkpoints, and versions. This aspect was not fully appreciated initially and required significant efforts to redesign the data architecture. We maintained two 'gold' datasets, one corresponding to people and demographics and one corresponding to tests' metadata.
These derived datasets were cleaned and organized to our standards to serve as the basis of further analysis. This cut down on the work of individual analysts, since those cleaning/organization steps did not need to be repeated. The 'gold' data of people, consisting of faculty, staff, students, and everyone else affiliated in some way with the university, updates significantly every semester, overwriting previous data in the database (S3 environment). We would save a snapshot of the data every semester, but unfortunately the snapshots were initially taken towards the end of the semesters, when students had already started leaving the campus. As a result, when we recently wanted to derive a time series of positivity rates in residence halls, it differed from the original because we no longer have the correct denominator. Recovering this information is possible, but requires integration of other data sources, demanding significant investment of resources, effort, and time. The majority of the people who were part of the university supporting the CMT and were responsible for setting up the system are no longer working at OSU. Moreover, early in the reopening of the university, the primary focus was on managing the pandemic and bringing down the positivity rate, and detailed documentation was not prioritized.

Mid-semester migration from one homegrown case data management solution to an outside vendor was a major issue that required major investment and retraining, and we are continuing to deal with this today from a data and analysis perspective.
Roughly from August 2020 to November 2020, we had our positive test (case) data ingested and case investigation/contact tracing notes stored in a secured instance of a HelpSpot database, integrating in some instances with REDCap surveys and pushing out to several communication platforms, but later we shifted to a Salesforce Health Cloud build, which assisted with future testing data variations and vaccine information, as well as some automatic reminder communications. The data had been migrated from the old table to the new one in theory, but user generated heterogeneity, as well as version control issues in the HelpSpot source data, meant there continued to be gaps in the data ingested by Health Cloud (Salesforce) which do not have simple workarounds for analysis of all variables. We maintain several tables for the test information storage, but there are inconsistencies across those tables. More than one table exists mainly because we derived simpler versions of tables with many columns that are not relevant for day-to-day analysis. One of the (intermediate) mother tables recently had one of its very important columns (the test specimen collection time/date column) dropped from an integration during an update, and it would have been acceptable to simply look it up in a derived or other related testing table had there not been major differences in the number of entries in the others.

The IT organization at OSU, then known as the Office of the CIO (OCIO), had embarked on a project prior to the COVID epidemic to move OSU enterprise data off premises and onto Amazon Web Services (AWS). AWS was the obvious choice as the data storage platform, as much of the data was already present on the platform, and tools such as Amazon Athena were able to provide a layer of data abstraction so that disparate datasets could be queried in a consistent manner.
That OCIO project to house these data in a consistent manner was fortunate; it would otherwise have added an additional layer of processing to export and synthesize data from various legacy systems. The other major consideration is that there are significant costs to using a commercial cloud service. While these were covered in part by the OCIO project, additional data storage for COVID data and the use of AWS tools such as Athena were incurred by the COVID project.

5.3 Data governance and ethical considerations

The university has a complex set of data governance regulations, as does individuals' private health information, whether used in healthcare or public health applications. While special authorization was granted to use some of the data in the pandemic emergency, security and privacy remained strict requirements. Each team member had training in handling secure and private data.

In addition to the standard data governance issues, dealing with high resolution personal data has its own set of ethical issues. Ultimately, the main question was: what is the benefit of using a particular data source or performing a particular analysis, and would it change the decisions or the pandemic dynamics? If so, was it necessary to use individual and identifiable data for decision making, or could aggregate or coded information have similar utility? For example, while it is within the rights of the university to use the WiFi access point information to "follow" an individual or to understand who is within the same room, such information has a high 'icky factor' and should be used sparingly. Moreover, while initially it seemed that WiFi data would provide a good proxy for contact tracing, it turned out that the resolution of the data did not correspond well to the physical definitions of a contact. Ultimately, it was decided to use WiFi data in aggregate to assess population movements rather than individuals' proximity to other individuals.
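A minimal sketch of such aggregate-only use of WiFi data follows: distinct devices are counted per building and hour, and the device identities are discarded once counted. The event format and names are assumptions for illustration, not the actual access-point feed:

```python
# Hypothetical sketch: turn raw WiFi association events into aggregate
# per-building, per-hour occupancy counts. Only the counts leave this
# function; device identifiers are not retained in the output.
from collections import Counter


def building_occupancy(wifi_events):
    """Return {(building, hour): distinct_device_count}.

    `wifi_events` is an iterable of (building, hour, device_id) tuples;
    repeated associations of the same device in the same building-hour
    are counted once.
    """
    seen = set()
    counts = Counter()
    for building, hour, device_id in wifi_events:
        key = (building, hour, device_id)
        if key not in seen:
            seen.add(key)
            counts[(building, hour)] += 1
    return dict(counts)
```

Aggregates of this kind support questions like "how many students left campus over the weekend" without ever following a named individual, which is the design choice the paper describes.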
For example, WiFi data was used to estimate the number of students leaving campus over the weekend or the number of students present in an "in person" classroom. Moreover, the aggregate trends proved to be much more robust than the individual-based analysis and were significantly less time consuming. Additionally, adherence to the current applicable statutory guidelines for case investigation, subsequent case management, and/or contact tracing may require some variation depending upon individuals' occupation, travel history, personal risk factors, immunocompetence, and vaccination status, which could include certain specific preexisting conditions, medications, clinical care received, viral (variant/sub-variant) lineage, and/or disease severity. However, specific individuals' health information related to their experience with COVID-19 would largely not meaningfully determine macro-level prevention policy or interventions in the university context independently from aggregate trends and information in the wider public health policy guidance, which are separately informed by individuals' public health, laboratory testing, and clinical health records. Therefore, particularly those sensitive individual level data, especially health data, were collected and subsequently shared only to the extent they would have 'meaningful use' within the data user groups' spheres of control, stated goals, and purview (i.e.
healthcare providerswould have access to information relevant for managingpatient care; public health authorities would have access toinformation relevant to determining specific application ofdisease management protocols for individuals and/or groups;occupation health, workplace, and student life safety per-sonnel would have limited access to information relevantto adherence with applicable disease prevention laws andpolicies aimed at risk reduction, such as adherence to testing,vaccination, and isolation/ quarantine requirements in someinstances).6 Takeaways6.1 Behavior over analyticsThe main takeaway of our data-supported pandemic monitor-ing framework is the same as the main takeaway for dealingwith the COVID-19 pandemic world-wide: ultimately, themain determinant of the success of the system hinges onmodifiable human behavior, rather than the sophisticationof the analysis. No improvement in the accuracy of the anal-ysis of the effect of masking in a given setting (i.e. library,classroom, laboratory, or healthcare setting) is meaningfulif people would not (continue to) comply with an indoormask mandate. Similar limitations became apparent withboth pharmaceutical and non-pharmaceutical interventions,even as evidence increasingly substantiated benefits and newsub-variants emerged, populations’ apparent risk tolerancegrew and spread.6.2 Communication is keyWorking with a team this large, with people from vastlydiverse backgrounds, communication between the teamsbecomes an essential component. A major part of the anal-ysis was being carried out by graduate student employees,who were sometimes not aware of things like floor struc-ture in dorms, testing protocols, vaccination mandates, etc.,Pandemic Data Collection, Management, Analysis and Decision Support:A Large Urban University Retrospective epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAwhich were important analysis components. 
Similarly, the modelling team was involved in building risk models, models for testing strategy development, etc. that relied on domain knowledge outside of mathematics or computer science. Clearly, experts in every relevant domain (epidemiology, public health, student residence life, university logistics and operations, etc.) need to be constant partners in the analysis.

6.3 Equity considerations and singling out demographic groups

When patterns appear to be emerging within a specific group or sub-demographic, there may be an equity oriented opportunity for targeting or strengthening an intervention, but there may also be a bias in the observed signal. One group may in fact be more often in situations involving exposure to infectious persons, or engaged in more risky behavior than others, as we occasionally discovered from data analysis. However, available policy level changes may not have been feasible solutions and were not always ultimately enacted. What we started to see in the data raised questions on the ethics and trustworthiness of data enabled interventions without context or corroboration. Some solutions aimed at addressing one group's perceived or real deficiency in access to resources, or its excessive exposure, could foster stigma or loss of other resources in unanticipated ways. After careful consideration, it was agreed that singling out a group was often not enough of a value addition, or could do more harm than good.
In some cases, trends observed initially in one population or group were indicative of larger trends that could be addressed by policy shifts relevant to the whole community, which would both address the observed inequity and mitigate known unintended consequences.

6.4 Micropatterns significant, but not usable in hindsight
The reflections on the decisions made over the course of three years showed that the micropatterns and the microtrends observed in the data had little to no effect on those decisions. Observations that a certain subgroup engaged in activities that increased the risk of the spread of the infection did not prompt the authorities to take measures to shut down those activities in many cases, because it was either not cost effective or unethical to do so. These data nuances did provide information, but it was not actionable. In retrospect, however, the information's main utility was in the fact that no single critical subgroup was the key to the solution. The scale of the phenomena did not lend itself to a single pathway of solution or a single target group. Patterns that we learned in settings like an early long-term care facility were also observed later in dorms, sorority and fraternity houses, and athletics teams, and they led to better population-level responses. A good example would be the limitations of certain kinds of tests for transmission suppression. The Big10 testing program involved daily testing of athletes during their competition season, given that team members were often unable to mask and physically distance in some sports. Unfortunately, when transmission started to increase rapidly in late autumn 2020 as sports teams re-started their compressed seasons, even daily testing with rapid results was insufficient to suppress transmission, largely because the particular test used did not detect all infectious individuals immediately.
By the time one tests positive on an antigen test, like those in use at that time, a person may have already been infected and infectious for a few days, potentially exposing others and continuing transmission chains. Antigen tests are useful for rapid diagnosis, particularly when symptomatic, but are not always ideally suited for early enough detection to reduce spread in a serial testing model. OSU opted for developing and deploying swift, minimally invasive (saliva-based), highly specific, highly sensitive PCR testing, shown to be able to detect pre-symptomatic and asymptomatic infections (eventually even processing results with its own PCR testing and sequencing lab capable of thousands of tests per day). Although they were not as fast as antigen tests, the average turnaround time was less than 24 hours during much of the semesters' most populated periods. This was a scenario where tracking a micropattern in a particular well-observed and well-resourced group gave us really good information about what and how we should be optimizing testing resources and working within their limitations with the larger university community's population.

6.5 Data infrastructure
The overall data infrastructure consists of cyberinfrastructure (compute, storage, networking, cloud and web services), information infrastructure (data and metadata management, search, archiving, cataloging, and digital services), and analytics infrastructure (data integration, harmonization, and analysis). The large volume of data collected, the collection rate, the distributed team setting, potential errors, inconsistencies and variations in reporting standards, and changing objectives all strained and challenged the existing data infrastructure at OSU and necessitated its expansion. Moreover, COVID-19 management provided a great case study and emphasized the fact that data infrastructure integrates cyber-, information, and data services infrastructures through human infrastructure.
Building the human infrastructure is both the most critical aspect and the hardest to implement of any data infrastructure. We have seen personnel migrate out of the team, and the university, and when that happens, they take institutional knowledge with them. Replacing personnel in such a fast-paced environment entails a lot of rigorous training that newer team members have to go through within a very short period of time. Even after being on board, it takes significant time to bring them up to speed, which often creates a bottleneck.

6.6 Scale
The sheer volume of COVID-19 data generated from testing and vaccination overwhelmed the existing data management systems of the university as well as the state. Scaling up data infrastructure and analytical capabilities to handle large-scale data collection and analysis proved to be a significant challenge, but one that can definitely be overcome.

7 Comparison between similar systems in place nationwide
The COVID-19 pandemic was monitored worldwide, and any attempt to track rates or contain the outbreaks had to involve systems governing huge amounts of data. Among the humongous number of research papers out there utilizing the pandemic data, very few of them talk about the nuances of the data collection and storage mechanisms deployed. For example, a paper [18] from the University of Michigan talks about collecting environmental surveillance data in order to estimate infection risk. This direction of research and analysis was popular in a lot of organizations and was a good means of estimating risk of infection within the campus from sources like dust and sewage water, including at OSU [6, 14, 15]. Another paper [11] discusses digital health research and tracking in general, but in the light of the pandemic and how it impacted practices.
Their concerns are very similar to ours, but unlike their generic view, we provide a complete story of a real experience, with a series of issues faced and tackled at an urban institution.

8 Conclusion
We hope that the COVID-19 pandemic was a one-off unique event, never to be repeated. Yet, we should be prepared to respond to a similar event by learning from our experience. We hope that the OSU CMT work presented here can serve not only as a blueprint, but as a guide for considerations, priorities, and potential pitfalls, should a response at this scale ever be needed.

Acknowledgments
We would like to acknowledge the work of many people who have contributed to the effort of enabling the data-driven approach to monitoring and managing the COVID-19 pandemic at the Ohio State University: the entire Comprehensive Monitoring Team (CMT), Case Investigation and Contact Tracing Team, CMT student analysts, CMT/IDI Modeling Team, Applied Microbiology Services Lab, Testing Operations Team, Student Life Isolation and Quarantine Team, Student Health Services, Employee Health Services, local and state public health authorities, dashboard developers, and the OTDI team, including D&A data engineers, the data governance team, network administrators, and enterprise security.

References
[1] A deeper dive into Ohio State's top-rated COVID-19 testing data dashboard. https://news.osu.edu/a-deeper-dive-into-ohio-states-top-rated-covid-19-testing-data-dashboard. Accessed July 31, 2023.
[2] IIS: HL7 Standard Code Set Mapping CVX to Vaccine Groups. https://www2.cdc.gov/vaccines/iis/iisstandards/vaccines.asp.
[3] Safe and Healthy Buckeyes COVID-19 Dashboard (archived). https://safeandhealthy.osu.edu/dashboard. Accessed July 31, 2023.
[4] Safe Campus Scientific Advisory Subgroup Recommendations. https://safeandhealthy.osu.edu/sites/default/files/2020/07/safe-campus_6.30.pdf. Accessed July 31, 2023.
[5] The Ohio State University Comprehensive Monitoring Team — Report 2. March 2, 2021.
https://safeandhealthy.osu.edu/sites/default/files/2021/03/the_ohio_state_university_comprehensive_monitoring_team_-_report_2.pdf. Accessed July 31, 2023.
[6] Tracking COVID-19 with dust at the Ohio State University. https://sapac.illumina.com/company/news-center/feature-articles/tracking-covid-19-with-dust-at-the-ohio-state-university.html. Accessed July 31, 2023.
[7] Achaiah, N. C., Subbarajasetty, S. B., and Shetty, R. M. R0 and Re of COVID-19: Can we predict when the pandemic outbreak will be contained? Indian Journal of Critical Care Medicine 24, 11 (Nov. 2020), 1125–1127.
[8] Centers for Disease Control and Prevention. COVID-19 Overview and Infection Prevention and Control Priorities in non-U.S. Healthcare Settings. https://www.cdc.gov/coronavirus/2019-ncov/hcp/non-us-settings/overview/index.html.
[9] Dallal, A. A., Dallal, U. A., and Dallal, J. A. Positivity rate: an indicator for the spread of COVID-19. Current Medical Research and Opinion 37, 12 (2021), 2067–2076.
[10] Doraiswamy, S., Mamtani, R., and Cheema, S. An in-depth analysis of 10 epidemiological terminologies used in the context of COVID-19. SAGE Choice 50, 6 (Dec. 2021), 819–826.
[11] Dron, L., Kalatharan, V., Gupta, A., Haggstrom, J., Zariffa, N., Morris, A. D., Arora, P., and Park, J. Data capture and sharing in the COVID-19 pandemic: a cause for concern. The Lancet Digital Health 4, 10 (Oct. 2022), E748–E756.
[12] Dusen, J. V., LeBlanc, H., Renninger, N., Nastas, N., Panescu, J., Smith, J. W., Sovic, M. G., Williams, A., Quam, M., Faith, S., and Dannemiller, K. Identification of SARS-CoV-2 variants in indoor dust. In Association of Environmental Engineering and Science Professors Research and Education Conference 2023 (2022).
[13] Krantz, M., Bleichrodt, A., and Quam, M. Housing diversity and SARS-CoV-2 transmission in a university setting.
In Quantitative Methodology Center 2022 Conference: Why Quantitative Research Matters (2022).
[14] Renninger, N., Nastasi, N., Bope, A., Cochran, S. J., Haines, S. R., Balasubrahmaniam, N., Stuart, K., Bivins, A., Bibby, K., Hull, N. M., and Dannemiller, K. C. Indoor Dust as a Matrix for Surveillance of COVID-19. ASM Journals 6, 2 (Apr. 2021).
[15] Wascher, M., Klaus, C., Alvarado, C., Bope, A., Panescu, J., Quam, M., Dannemiller, K., and Joseph, T. A mechanistic modeling and estimation framework for environmental pathogen surveillance. In Society of Mathematical Biology Meeting, Mini-Symposium (2022).
[16] Wascher, M., Schnell, P. M., Khudabukhsh, W. R., Quam, M., Tien, J. H., and Rempała, G. A. Monitoring SARS-CoV-2 transmission and prevalence in population under repeated testing. medRxiv (2021).
[17] World Health Organization. Clinical management of COVID-19. https://www.who.int/teams/health-care-readiness/covid-19.
[18] Zhang, X., Wu, J., Smith, L. M., Li, X., Yancey, O., Franzblau, A., Dvonch, J. T., Xi, C., and Neitzel, R. L. Monitoring SARS-CoV-2 in air and on surfaces and estimating infection risk in buildings and buses on a university campus. Journal of Exposure Science and Environmental Epidemiology 32 (2022), 751–758.
LtzEQpWylBm
OSU Covid-19 Data Retrospective
2: Marginally below acceptance threshold
# Quality
The paper is well written and provides a comprehensive retrospective of OSU's pandemic response.

# Clarity
The paper seems to offer examples rather than comprehensive descriptions of data. Understandably difficult to cover everything, but in a retrospective like this, comprehensive analysis is going to be more useful.

# Originality
Similar pandemic response retrospectives exist for other institutions; while interesting seeing OSU's work, originality is low.

# Significance
While a good snapshot of the work that occurred at a large scale public university, I feel the lack of originality reduces the overall significance.

The paper offers unique and detailed insight into the Ohio State pandemic response process and data collection, detailing the successes and failures of different applications and the lessons that the university leadership learned in the application of these policies. The lessons detailed would be applicable to another pandemic situation should one arise, making iterations on this faster and producing more useful insights more quickly. The paper itself, while interesting to read and learn from, lacks large unique insights, rather agreeing with many other retrospectives with minor shifts in lessons and policies.

NOTE: it looks like sections 6/7 are incorrectly labeled (section 7 uses a \section{} tag rather than a \subsection{} tag)
3: The reviewer is fairly confident that the evaluation is correct
BNU_N-7EIR
KDD.org/2023/Workshop/epiDAMIK
2023
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective
["Namrata Banerji", "Steve Chang", "Andrew Perrault", "Tanya Berger-Wolf", "Mikkel Quam"]
The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University's data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection, investigate the roles and limitations of data analysis in supporting intervention choice and implementation strategies amid the complexities of the pandemic as it unfolded. Balancing privacy, consent, and transparency and ensuring the responsible handling of sensitive information is crucial in maintaining public trust. We examine privacy-preserving techniques, ethical frameworks, and legal regulations aimed at safeguarding individuals' rights while harnessing the power of data. In our experience, conscientious data architecture provided a foundation for meaningful ethical applications of data products, which not only helped mitigate the current crisis, but also can provide valuable insights for better addressing future public health emergencies.
["datasets", "public health", "data management", "ethics"]
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective

Namrata Banerji, [email protected], The Ohio State University, Columbus, Ohio, USA
Steve Chang, [email protected], Ohio Supercomputer Center, Columbus, Ohio, USA
Andrew Perrault, [email protected], The Ohio State University, Columbus, Ohio, USA
Tanya Y. Berger-Wolf, [email protected], The Ohio State University, Columbus, Ohio, USA
Mikkel Quam, [email protected], The Ohio State University, Columbus, Ohio, USA

Figure 1. Archived OSU Safe & Healthy COVID-19 Dashboard for November 2, 2020

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA
© 2023 Copyright held by the owner/author(s).

Abstract
The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University's data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection and investigate the roles and limitations of data analysis in supporting intervention choice and implementation strategies amid the complexities of the pandemic as it unfolded. Balancing privacy, consent, and transparency and ensuring the responsible handling of sensitive information is crucial in maintaining public trust. We examine privacy-preserving techniques, ethical frameworks, and legal regulations aimed at safeguarding individuals' rights while harnessing the power of data.
In our experience, conscientious data architecture provided a foundation for meaningful ethical applications of data products, which not only helped mitigate the current crisis, but also can provide valuable insights for better addressing future public health emergencies.

CCS Concepts: • Information systems → Database administration; • Applied computing → Health care information systems.

Keywords: datasets, public health, data management, ethics

ACM Reference Format:
Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel Quam. 2023. Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 8 pages.

1 Introduction
The onset of the COVID-19 pandemic in early 2020 was one of the most significant and life-changing events for everyone on the planet, impacting everything from small businesses to entire countries. In the case of educational institutions, the indefinite suspension of classes, the upending of every traditional aspect of academic and student life, and the transition to virtual education were stressful for students, staff, and faculty alike. The Ohio State University (OSU), a large urban educational institution, undertook a massive policy response to support the continuing function of the university by monitoring and managing the dynamics of the pandemic on and around its campuses. Putting together a coalition of epidemiologists, data scientists, and public health policy makers was only the first step of what shaped up to be at least a three-year marathon. Data was at the center of the whole process, both as the decision enabler and as the product of many of the contributing efforts.
To make data actionable required the work of many teams and several iterations of cleaning, analysis and inference, and visualization. In this paper, we present the overall data-focused aspects of the process, highlighting the achievements and the hindrances, as well as the major takeaways, so that we are better prepared for future public health emergencies or other large-scale collective responses. This manuscript, besides serving as a piece of institutional memory, communicates in detail the various obstacles encountered in the handling of the mammoth data, for the data science community to be aware of. Among the main takeaways we consider the effectiveness of the data-driven approaches for managing the pandemic response, the need for an institutional data infrastructure, and the importance of a well-organized team of experts and professionals working together towards a well-defined goal.

2 Overview
The Ohio State University stood up the Comprehensive Monitoring Team (CMT) [4] to include a framework of support for data-driven decisions for pandemic management, including robust case finding (via serial mass administration of individual PCR tests with rapid in-house processing), locally administered isolation of cases, contact tracing and quarantine of close contacts, as well as data integration, analysis, modelling, risk evaluation, policy recommendations, and intervention implementation based upon knowledge derived from individual case management, subsequent viral (genomic) sequencing, large-scale syndromic surveillance, and evidence of environmental (wastewater and dust) shedding [6, 12, 14, 15]. Here we present the core of the data component of this system, which integrated data from various testing centers, conducted daily analyses, and represented data in formats usable by the leadership to support both individual-level contact tracing and the university's policy response to the public health emergency.
In the coming sections, we discuss the goal of setting up such a system, the implementation pipeline, data sources, and some of the challenges and takeaways.

3 Goals
Building and maintaining such a huge framework and employing a whole workforce including faculty, students, and healthcare workers consumes university resources at a large scale. The goals were the result of several rapid iterations of convergent conversations between the university administration and members of the CMT, as well as consultations with external experts. The specific aims of the data components of the framework were as follows:
• Tracking the positivity rate. Positivity rate, or testing positivity rate, defined as the percentage of tests reported that are positive [10], emerged early in the pandemic as the agreed-upon indicator of the state of the population and the basis for comparing different populations [9]. We used the positivity rate throughout the monitoring process for a number of reasons, one of them being that this percentage (sometimes a fraction) was the most expressive and conveyed a more complete story than other measures such as the absolute number of positive cases. It is true that 100% of the university population was not being tested, because there were exemptions (medical and otherwise) and non-compliants, but we had the data necessary to determine exactly what fraction of the population was being tested. This was the best metric that we could monitor from the data and information available to us at the time, and it never became a cause for concern.
• Contact tracing. Removal of positive and potentially positive cases from the population is key to suppressing the spread of the virus [8, 17]. It was necessary to provide contact information for people who tested positive and to identify and contact their close contacts in order to isolate and quarantine them, respectively.
• Understanding the micro trends and risks based on events.
To understand the dynamics, the risks, and the implications of the pandemic for various subpopulations, it was necessary to provide the ability to zoom in on specific time intervals and subgroups in the data. Examples of the questions asked include: How does fall break or Halloween behaviour change/impact infection rates? Is there an increased risk for students in a 4-person suite over a 2-person dorm room? How do the risks associated with in-person classes compare with hybrid or remote classes?
• Supporting daily policy decisions of a large urban university. Daily decisions supported by data included the choice of a testing strategy and protocol, transition to hybrid vs. online-only classes, occupancy in classrooms, vaccination and masking requirements, etc. Having access to the right data was essential. The testing protocol [3, 16] was more strict in the early days of the pandemic, requiring all students who lived in residence halls or who had at least one in-person class to test at least once every week. The requirements were relaxed in the subsequent semesters. Testing mandates were also in place around holidays; for example, students were required to test before a Thanksgiving break and after. The WiFi data was often utilized to get a sense of how many students were still residing in the dorms over the break, and how many went home.
• Reducing burden in the wider population. OSU Columbus campus is a large urban campus with a highly permeable boundary in the center of a city. In order to contain the pandemic, the infection rates needed to be controlled both on and around campus. Moreover, the university sought to mitigate the export of infections to communities beyond its campuses.
College students mix with the city population and visit their families over academic breaks, potentially increasing the risk of transmission to vulnerable community members. Recommending, and at times requiring, testing before the academic breaks was one such measure taken to reduce the burden on the vulnerable immuno-compromised population outside the university.

4 Implementation
OSU has 68,000 students, 12,000 of whom reside in residence halls during a regular year. During the pandemic, about 8,000 students were in residence halls and were required to test weekly. Additional students, faculty, and staff were testing voluntarily. At its peak, more than 30,000 tests per week were processed.

Multiple teams across Information Technology support, Student Life, the Translational Data Analytics Institute (TDAI), the Infectious Disease Institute (IDI), University Medical Centers, the College of Public Health, and many more were responsible for standing up a system that would be in place for at least the next 3 years. The data environment was a secure and flexible environment that allowed for dynamic data definition and integration of data from at least 56 sources when it was introduced. (The number of data sources grew to over 100 by the end of 2022.) Initial data sources included testing data together with the administrative data of student information, residence and permanent addresses, demographics, class registration, residence layout, class and college affiliations, WiFi access point information, and much more. The pipeline is illustrated in Figure 2 and is described very briefly below.
• Primary test data was transmitted into the internal secure data environment via electronic file transfer multiple times a day.
• Additional attributions from other internal OSU systems (Identity Management (IDM), Student Information Systems (SIS), Student Life, etc.) were preloaded and updated according to the system's change protocol (e.g.
each semester).
• Test results and internal data were combined into a cohesive reusable dataset (AKA the "gold table").
• Analysts and dashboard builders utilized a common source for all reports and visualizations.
• Data was also sent to Helpspot/Salesforce to support case investigation and contact tracing efforts.

4.1 Data description and daily analysis
Among the 50+ tables and views that were maintained on AWS, there were 10-12 datasets, described below, that were most frequently accessed for daily analysis reports.
• 'Gold' dataset of people: This view is derived from multiple tables that contain individuals' unique identifiers; demographic information such as gender, race, ethnicity, and age; home and campus addresses; affiliation with the university; affiliation with an OSU campus; indicators of whether they are on or off campus; student housing residence; etc. There are roughly 2.5 million entries in this dataset, with updates at regular time intervals of changing affiliations, addresses, and other variables.
• 'Gold' dataset of tests: Similar to the gold person table, this is also a derived view of data on tests administered by the university that combines variables like test provider name, test administered time, test result time, test result, type of test conducted, etc. It also contained some of the demographic information and addresses, so that quick results could be obtained by running simple queries, without joining multiple tables.
• Dataset on off-campus residence housing: This dataset contains information on which organizations individuals are a member of, whether they are an active member, whether they live in the organization housing, etc. This was a particularly useful dataset at the beginning of the pandemic, as many outbreaks occurred in off-campus residence houses, which were analyzed for patterns [13].
• Dataset on contact tracing: Each actionable positive test result generated a ticket, which is entered into a SalesForce(TM) dataset of tickets.
The metadata associated with each ticket included a unique ticket identifier, the person whose close contact this is, the person who is the close contact, both their information, the time and result of the test, whether that person had symptoms, whether that person is an OSU affiliate, etc. This dataset was important throughout the pandemic, since these tests and contacts were the focus of most of the analyses. Also, this dataset contained data on positive tests even if they were not present in the gold test data table. This is because, while the gold table only recorded tests that were administered by the university, the SalesForce(TM) tickets dataset contained information on other tests, some from outside the university, as long as they were positive. This dataset was thus a good source for the absolute number of positives in the university community, but not very good for computing rates, due to the absence of a denominator.

Figure 2. Data flow in the OSU COVID-19 monitoring pipeline.

• Datasets on class enrollment: When the university reopened for the Fall after the summer of 2020, a lot of classes were online, some were hybrid, and few were in person. It was important to know if there was additional risk of infection for students enrolled in classes conducted in person, and decisions had to be made to combat the risk and spread of infections. The class enrollment datasets were key in this effort.
• Datasets on vaccination: Two datasets were maintained that contained vaccination information, one for students and one for employees (including staff). Although containing the same information in essence, the two were structured differently. The tables for students contained two date variables, one denoting the date of dose received, and the other indicating the date when the individual becomes fully vaccinated according to CDC guidelines.
It also had variables corresponding to whether the individual had a vaccination exemption, whether the dose was CDC approved, the CDC code (e.g., 208 for Pfizer) [2], whether the shot was a booster, etc. On the other hand, the employee vaccination table contained columns for the first vaccination date, second vaccination date, up to the seventh vaccination date, and the provider information for each, in addition to the exemption and booster indications. Thus, the data analysis needed to produce the same results from the two tables needed to be different.

The initial daily analysis included a breakdown of test positivity rate in each of the residence halls, between demographics, majors, and campuses. This was for internal consumption, pattern identification, and insight derivation. Much of this data and the derived analysis was private and was not made public. The results that did make it to the dashboard [3], as shown in Figure 1, were the aggregate and summary numbers: the reproduction number, which is a standard epidemiological metric [7], the daily number of cases, the 7-day average, etc.¹ Identification of close contacts of students residing in dorms was a large part of the daily analysis, and the gold datasets were utilized to that end to produce a list of roommates and suitemates. A concise description of the analysis performed was first published in an initial report [4] in October 2020 and updated in a second report [5] in March 2021 by the CMT.

5 Challenges
The novelty, scale, and duration of the recent and ongoing pandemic were major challenges. Data collection, management, and analysis pipelines at this scale had no modern precedent and had to be designed as they were beginning to be used. Moreover, the timelines were drastically compressed and the requirements initially were changing frequently. In addition, some areas, such as close contacts or attendance of events, lacked data collection, and some critical data streams, including off-campus testing, were initially completely absent.
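The "gold" test view and the positivity-rate tracking described in Sections 3 and 4.1 amount to a pre-computed join of the test stream with the person table, followed by a per-day aggregate. A minimal pandas sketch of that idea follows; all table layouts and column names here (e.g. `name_n` as a stand-in for the name.# identifier) are hypothetical, not the actual OSU schema:

```python
import pandas as pd

# Hypothetical person and test streams (illustrative data only).
people = pd.DataFrame({
    "name_n": ["smith.1", "doe.2", "lee.3"],       # stand-in for name.#
    "campus": ["Columbus", "Columbus", "Newark"],
    "residence_hall": ["Morrill", None, "Founders"],
})
tests = pd.DataFrame({
    "name_n": ["smith.1", "doe.2", "doe.2", "lee.3"],
    "result": ["negative", "positive", "negative", "negative"],
    "collected": pd.to_datetime(
        ["2020-11-01", "2020-11-01", "2020-11-08", "2020-11-01"]),
})

# Derived "gold" view: each test pre-joined with demographics, so
# analysts do not repeat the join in every report.
gold_tests = tests.merge(people, on="name_n", how="left")

# Daily positivity rate: percentage of reported tests that are positive.
positivity = (
    gold_tests.assign(positive=gold_tests["result"].eq("positive"))
    .groupby(gold_tests["collected"].dt.date)["positive"]
    .mean() * 100
)
print(positivity.to_dict())
```

With the toy data above, November 1 has one positive out of three tests and November 8 has none, so the per-day rates are about 33.3% and 0%. The design point mirrors the paper's: because the denominator (all tests reported) travels with the numerator in the same view, the rate is always computable, unlike the SalesForce ticket data described earlier.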
Further, as with most teams around the world, we initially lacked a full understanding of how to translate the questions into data and how to prioritize the variables and the analysis for decision support, particularly in the context of human behavior. Below are some of the issues that posed significant challenges to the team.

¹ The dashboard was awarded the A+ rating and selected as the best COVID-19 university dashboard by the "We Rate Covid Dashboards" panel of academics [1].

5.1 Data cleaning
The data was collected from numerous sources, some of which were manual entries, and consequently had unavoidable human error. For example, a table of people in the database had the OSU unique identification (name.#) as the primary key, and the table of test results was supposed to have the same as a foreign key. Typographical errors or null values in this identifier column resulted in our inability to match a test to an individual, causing a non-negligible shift in the summary statistics. Once the problem had been identified, there was a joint effort to clean it up, combining more than four data streams and reducing the number of unidentified tests to a number that would not change the inference. Yet, there were still a few individually unidentifiable entries in the datasets, albeit not a high enough number to raise concern. Minimizing manual entry to data sources can reduce such issues by a considerable amount.

A similar problem was found in the table for employee vaccination records, with clearly wrong dates of doses. While most were due to errors, in some cases employees were actually part of vaccination trials and had received a dose before any vaccination received emergency use authorization or approval for distribution to the general public.
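The identifier mismatch described in Section 5.1 is the classic orphaned-foreign-key problem, and the affected test records can be surfaced with a simple anti-join. The sketch below is illustrative only (hypothetical roster, column names, and `name_n` standing in for the name.# identifier), not the actual cleanup pipeline:

```python
import pandas as pd

# Hypothetical roster and test feed; the test stream was partly
# hand-entered, so typos and nulls crept into the identifier column.
roster = pd.DataFrame({"name_n": ["smith.1", "doe.2", "lee.3"]})
tests = pd.DataFrame({
    "test_id": [101, 102, 103, 104],
    "name_n": ["smith.1", "doe2", "lee.3", None],  # typo and null value
})

# Anti-join: keep only tests whose identifier matches nobody on the
# roster; these are the records needing manual reconciliation.
merged = tests.merge(roster, on="name_n", how="left", indicator=True)
orphans = merged.loc[merged["_merge"] == "left_only", "test_id"]
print(sorted(orphans))
```

Here the typo `doe2` and the null identifier leave tests 102 and 104 unmatched. Running such a check on every ingest, rather than after summary statistics drift, is the cheap version of the "joint effort to clean it up" the section describes.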
These cases were indistinguishable from the erroneous cases without careful manual investigation and knowledge of the regulatory frameworks and timing of numerous vaccine candidates from all over the world.

One of the challenges that the team immediately encountered while using demographic data was that there were a number of similar datasets, curated by different organizations at OSU and used for different operational purposes. Repurposing these for COVID-19 demographic analysis required that specific datasets and methodologies be employed for consistency. A critical part of the human infrastructure here were experts in the use of these legacy datasets, able to share what nuances may have been encoded in the data and to help determine the least wrong datasets and methods to use. This investigation eventually led to the creation of the "gold" datasets, so named because they were the COVID project's gold-standard demographics associated with an individual or test. These examples illustrate the need for expert data curation, close scrutiny of analysis outputs that consumed these data sources, efforts to minimize manual data entry, and close collaboration with domain experts at every step.

5.2 Data storage, backup, documentation, and recovery

The volume of data generated by testing mandates as well as voluntary testing required careful consideration of large, yet quickly accessible and continuously backed-up, data storage. The ability to look up prior data was critical to understanding trends and their dynamics, as well as to comparing the outcomes of past decisions. For continuously changing data, such as the daily updated test data, regular snapshots, checkpoints, and versions must be maintained. This aspect was not fully appreciated initially and required significant effort to redesign the data architecture. We maintained two ‘gold’ datasets, one corresponding to people and demographics and one corresponding to tests’ metadata.
These derived datasets were cleaned and organized to our standards and became the basis of further analysis. This cut down on the work of individual analysts, since the cleaning and organization steps did not need to be repeated. The ‘gold’ data on people, consisting of faculty, staff, students, and everyone else affiliated in some way with the university, changes significantly every semester, overwriting previous data in the database (S3 environment). We would save a snapshot of the data every semester, but unfortunately the snapshots were initially taken towards the end of the semesters, when students had already started leaving campus. As a result, when we recently wanted to build a time series of positivity rates in residence halls, it differed from the original, since we no longer had the correct denominator. Recovering this information is possible, but it requires integration of other data sources, demanding a significant investment of resources, effort, and time. The majority of the people at the university who supported the CMT and were responsible for setting up the system no longer work at OSU. Moreover, early in the reopening of the university, the primary focus was on managing the pandemic and bringing down the positivity rate, and detailed documentation was not prioritized.

A mid-semester migration from a homegrown case data management solution to an outside vendor was a major issue that required substantial investment and retraining, and we continue to deal with it today from a data and analysis perspective.
Roughly from August 2020 to November 2020, we had our positive test (case) data ingested and case investigation/contact tracing notes stored in a secured instance of a HelpSpot database, integrating in some instances with REDCap surveys and pushing out to several communication platforms; later we shifted to a Salesforce Health Cloud build, which accommodated future testing data variations and vaccine information, as well as some automatic reminder communications. The data had in theory been migrated from the old tables to the new ones, but user-generated heterogeneity, as well as version control issues in the HelpSpot source data, meant there continued to be gaps in the data ingested by Health Cloud (Salesforce) that have no simple workarounds for analysis of all variables. We maintain several tables for test information storage, but there are inconsistencies across those tables. More than one table exists mainly because we derived simpler versions of tables with many columns that are not relevant for day-to-day analysis. One of the (intermediate) mother tables recently had one of its most important columns (the test specimen collection time/date) dropped from an integration during an update; it would have been fine to simply look it up in a derived or other related testing table had there not been major differences in the number of entries in the others.

The IT organization at OSU, then known as the Office of the CIO (OCIO), had embarked on a project prior to the COVID epidemic to move OSU enterprise data off premises and onto Amazon Web Services (AWS). AWS was the obvious choice as the data storage platform, as much of the data was already present on the platform, and tools such as Amazon Athena were able to provide a layer of data abstraction so that disparate datasets could be queried in a consistent manner.
That OCIO project to house these data in a consistent manner was fortunate; it would otherwise have added an additional layer of processing to export and synthesize data from various legacy systems. The other major consideration is the significant cost of using a commercial cloud service. While these costs were covered in part by the OCIO project, additional data storage for COVID data and the use of AWS tools such as Athena were incurred by the COVID project.

5.3 Data governance and ethical considerations

The university has a complex set of data governance regulations, as does individuals’ private health information, whether used in healthcare or public health applications. While special authorization was granted to use some of the data in the pandemic emergency, security and privacy remained strict requirements. Each team member was trained in handling secure and private data.

In addition to the standard data governance issues, dealing with high-resolution personal data has its own set of ethical issues. Ultimately, the main question was: what is the benefit of using a particular data source or performing a particular analysis, and would it change the decisions or the pandemic dynamics? If so, was it necessary to use individual and identifiable data for decision making, or could aggregate or coded information have similar utility? For example, while it is within the rights of the university to use WiFi access point information to “follow" an individual or to understand who is within the same room, such information has a high ‘icky factor’ and should be used sparingly. Moreover, while it initially seemed that WiFi data would provide a good proxy for contact tracing, it turned out that the resolution of the data did not correspond well to the physical definition of a contact. Ultimately, it was decided to use WiFi data in aggregate to assess population movements rather than individuals’ proximity to other individuals.
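The aggregate use of WiFi data can be sketched as a simple presence count that never retains who was near whom. Device identifiers are assumed to arrive already hashed, and all names here are hypothetical:

```python
from collections import defaultdict

def daily_presence(wifi_events):
    """wifi_events: iterable of (day, hashed_device_id, access_point).
    Returns {day: number of distinct devices seen on campus} -- an
    aggregate head count that deliberately discards proximity information."""
    seen = defaultdict(set)
    for day, device, _ap in wifi_events:
        seen[day].add(device)
    return {day: len(devices) for day, devices in seen.items()}

events = [("Fri", "d1", "dorm-3F"), ("Fri", "d2", "library"),
          ("Sat", "d1", "dorm-3F"), ("Fri", "d1", "gym")]
print(daily_presence(events))  # {'Fri': 2, 'Sat': 1}
```

Comparing Friday and Saturday counts is exactly the kind of aggregate "students leaving campus over the weekend" estimate described next, with no per-person trail retained.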
For example, WiFi data was used to estimate the number of students leaving campus over the weekend or the number of students present in an “in person" classroom. Moreover, the aggregate trends proved to be much more robust than the individual-based analysis and were significantly less time consuming. Additionally, adherence to the applicable statutory guidelines for case investigation, subsequent case management, and/or contact tracing may require some variation depending upon individuals’ occupation, travel history, personal risk factors, immunocompetence, and vaccination status, which could include certain specific preexisting conditions, medications, clinical care received, viral (variant/sub-variant) lineage, and/or disease severity. However, specific individuals’ health information related to their experience with COVID-19 would largely not meaningfully determine macro-level prevention policy or interventions in the university context independently from aggregate trends and information in the wider public health policy guidance, which are separately informed by individuals’ public health, laboratory testing, and clinical health records. Therefore, sensitive individual-level data, especially health data, were collected and subsequently shared only to the extent they would have ‘meaningful use’ within the data user groups’ spheres of control, stated goals, and purview (i.e.
healthcare providers would have access to information relevant for managing patient care; public health authorities would have access to information relevant to determining the specific application of disease management protocols for individuals and/or groups; occupational health, workplace, and student life safety personnel would have limited access to information relevant to adherence with applicable disease prevention laws and policies aimed at risk reduction, such as adherence to testing, vaccination, and isolation/quarantine requirements in some instances).

6 Takeaways

6.1 Behavior over analytics

The main takeaway of our data-supported pandemic monitoring framework is the same as the main takeaway for dealing with the COVID-19 pandemic worldwide: ultimately, the success of the system hinges on modifiable human behavior rather than on the sophistication of the analysis. No improvement in the accuracy of the analysis of the effect of masking in a given setting (i.e., library, classroom, laboratory, or healthcare setting) is meaningful if people will not (continue to) comply with an indoor mask mandate. Similar limitations became apparent with both pharmaceutical and non-pharmaceutical interventions: even as evidence increasingly substantiated their benefits and new sub-variants emerged, populations’ apparent risk tolerance grew and spread.

6.2 Communication is key

When working with a team this large, with people from vastly diverse backgrounds, communication between the teams becomes an essential component. A major part of the analysis was carried out by graduate student employees, who were sometimes not aware of things like the floor structure in dorms, testing protocols, vaccination mandates, etc., which were important analysis components.
Similarly, the modelling team was involved in building risk models, models for testing strategy development, etc., which relied on domain knowledge outside of mathematics or computer science. Clearly, experts in every relevant domain (epidemiology, public health, student residence life, university logistics and operations, etc.) need to be constant partners in the analysis.

6.3 Equity considerations and singling out demographic groups

When patterns appear to be emerging within specific groups or sub-demographics, there may be an equity-oriented opportunity for targeting or strengthening an intervention, but there may also be a bias in the observed signal. One group may in fact be more often in situations involving exposure to infectious persons, or engaged in riskier behavior than others, as we occasionally discovered from data analysis. However, the available policy-level changes may not have been feasible solutions and were not always ultimately enacted. What we started to see in the data raised questions about the ethics and trustworthiness of data-enabled interventions without context or corroboration. Some solutions aimed at addressing one group’s perceived or real deficiency in access to resources, or excessive exposure, could foster stigma or the loss of other resources in unanticipated ways. After careful consideration, it was agreed that singling out a group often added too little value, or could do more harm than good.
In some cases, trends observed initially in one population or group were indicative of larger trends that could be addressed by policy shifts relevant to the whole community, which would both address the observed inequity and mitigate known unintended consequences.

6.4 Micropatterns significant, but not usable in hindsight

Reflection on the decisions made over the course of three years showed that the micropatterns and microtrends observed in the data had little to no effect on those decisions. Observations that a certain subgroup engaged in activities that increased the risk of spreading the infection did not, in many cases, prompt the authorities to take measures to shut down those activities, because doing so was either not cost effective or unethical. These data nuances did provide information, but it was not actionable. In retrospect, however, the information’s main utility was in the fact that no single critical subgroup was the key to the solution. The scale of the phenomenon did not lend itself to a single solution pathway or a single target group. Patterns that we learned in settings like an early long-term care facility were also observed later in dorms, sorority and fraternity houses, and athletics teams, and they led to better population-level responses. A good example would be the limitations of certain kinds of tests for transmission suppression. The Big Ten testing program involved daily testing of athletes during their competition season, given that team members were often unable to mask and physically distance in some sports. Unfortunately, when transmission started to increase rapidly in late autumn 2020 as sports teams restarted their compressed seasons, even daily testing with rapid results was insufficient to suppress transmission, largely because the particular test used did not detect all infectious individuals immediately.
By the time one tests positive on an antigen test, like those in use at that time, a person may already have been infected and infectious for a few days, potentially exposing others and continuing transmission chains. Antigen tests are useful for rapid diagnosis, particularly when symptomatic, but are not always ideally suited for detection early enough to reduce spread in a serial testing model. OSU opted to develop and deploy swift, minimally invasive (saliva-based), highly specific, highly sensitive PCR testing, shown to be able to detect pre-symptomatic and asymptomatic infections (eventually even processing results with its own PCR testing and sequencing lab, capable of thousands of tests per day). Although not as fast as antigen tests, the average turnaround time was less than 24 hours during much of the semesters’ most populated periods. This was a scenario where tracking a micropattern in a particular well-observed and well-resourced group gave us very good information about what and how we should be optimizing testing resources, and about working within their limitations across the larger university community’s population.

6.5 Data infrastructure

The overall data infrastructure consists of cyberinfrastructure (compute, storage, networking, cloud and web services), information infrastructure (data and metadata management, search, archiving, cataloging, and digital services), and analytics infrastructure (data integration, harmonization, and analysis). The large volume of data collected, the collection rate, the distributed team setting, potential errors, inconsistencies and variations in reporting standards, and changing objectives all strained and challenged the existing data infrastructure at OSU and necessitated its expansion. Moreover, COVID-19 management provided a great case study and emphasized the fact that data infrastructure integrates cyber-, information, and data services infrastructures through human infrastructure.
Building the human infrastructure is both the most critical aspect of any data infrastructure and the hardest to implement. We have seen personnel migrate out of the team, and out of the university, and when that happens they take institutional knowledge with them. Replacing personnel in such a fast-paced environment entails a lot of rigorous training that newer team members must go through within a very short period of time. Even after they are on board, it takes significant time to bring them up to speed, which often creates a bottleneck.

6.6 Scale

The sheer volume of COVID-19 data generated from testing and vaccination overwhelmed the existing data management systems of the university as well as the state. Scaling up data infrastructure and analytical capabilities to handle large-scale data collection and analysis proved to be a significant challenge, but one that can definitely be overcome.

7 Comparison with similar systems in place nationwide

The COVID-19 pandemic was monitored worldwide, and any attempt to track rates or contain the outbreaks had to involve systems governing huge amounts of data. Among the enormous number of research papers utilizing pandemic data, very few discuss the nuances of the data collection and storage mechanisms deployed. For example, a paper [18] from the University of Michigan discusses collecting environmental surveillance data in order to estimate infection risk. This direction of research and analysis was popular in many organizations, including OSU [6, 12, 14, 15], and was a good means of estimating the risk of infection within the campus from sources like dust and sewage water. Another paper [11] discusses digital health research and tracking in general, but in light of the pandemic and how it impacted practices.
Their concerns are very similar to ours, but unlike their generic view, we provide a complete story of a real experience, with a series of issues faced and tackled at an urban institution.

8 Conclusion

We hope that the COVID-19 pandemic was a one-off, unique event, never to be repeated. Yet we should be prepared to respond to a similar event by learning from our experience. We hope that the OSU CMT work presented here can serve not only as a blueprint, but as a guide to considerations, priorities, and potential pitfalls, should a response at this scale ever be needed.

Acknowledgments

We would like to acknowledge the work of the many people who have contributed to the effort of enabling the data-driven approach to monitoring and managing the COVID-19 pandemic at the Ohio State University: the entire Comprehensive Monitoring Team (CMT), Case Investigation and Contact Tracing Team, CMT student analysts, CMT/IDI Modeling Team, Applied Microbiology Services Lab, Testing Operations Team, Student Life Isolation and Quarantine Team, Student Health Services, Employee Health Services, local and state public health authorities, dashboard developers, and the OTDI team, including D&A data engineers, the data governance team, network administrators, and enterprise security.

References

[1] A deeper dive into Ohio State’s top-rated COVID-19 testing data dashboard. https://news.osu.edu/a-deeper-dive-into-ohio-states-top-rated-covid-19-testing-data-dashboard. Accessed July 31, 2023.
[2] IIS: HL7 Standard Code Set Mapping CVX to Vaccine Groups. https://www2.cdc.gov/vaccines/iis/iisstandards/vaccines.asp.
[3] Safe and Healthy Buckeyes COVID-19 Dashboard (archived). https://safeandhealthy.osu.edu/dashboard. Accessed July 31, 2023.
[4] Safe Campus Scientific Advisory Subgroup Recommendations. https://safeandhealthy.osu.edu/sites/default/files/2020/07/safe-campus_6.30.pdf. Accessed July 31, 2023.
[5] The Ohio State University Comprehensive Monitoring Team — Report 2. March 2, 2021.
https://safeandhealthy.osu.edu/sites/default/files/2021/03/the_ohio_state_university_comprehensive_monitoring_team_-_report_2.pdf. Accessed July 31, 2023.
[6] Tracking COVID-19 with dust at the Ohio State University. https://sapac.illumina.com/company/news-center/feature-articles/tracking-covid-19-with-dust-at-the-ohio-state-university.html. Accessed July 31, 2023.
[7] Achaiah, N. C., Subbarajasetty, S. B., and Shetty, R. M. R0 and Re of COVID-19: Can we predict when the pandemic outbreak will be contained? Indian Journal of Critical Care Medicine 24, 11 (Nov. 2020), 1125–1127.
[8] Centers for Disease Control and Prevention. COVID-19 Overview and Infection Prevention and Control Priorities in Non-U.S. Healthcare Settings. https://www.cdc.gov/coronavirus/2019-ncov/hcp/non-us-settings/overview/index.html.
[9] Dallal, A. A., Dallal, U. A., and Dallal, J. A. Positivity rate: an indicator for the spread of COVID-19. Current Medical Research and Opinion 37, 12 (2021), 2067–2076.
[10] Doraiswamy, S., Mamtani, R., and Cheema, S. An in-depth analysis of 10 epidemiological terminologies used in the context of COVID-19. SAGE Choice 50, 6 (Dec. 2021), 819–826.
[11] Dron, L., Kalatharan, V., Gupta, A., Haggstrom, J., Zariffa, N., Morris, A. D., Arora, P., and Park, J. Data capture and sharing in the COVID-19 pandemic: a cause for concern. The Lancet Digital Health 4, 10 (Oct. 2022), E748–E756.
[12] Dusen, J. V., LeBlanc, H., Renninger, N., Nastas, N., Panescu, J., Smith, J. W., Sovic, M. G., Williams, A., Quam, M., Faith, S., and Dannemiller, K. Identification of SARS-CoV-2 variants in indoor dust. In Association of Environmental Engineering and Science Professors Research and Education Conference 2023 (2022).
[13] Krantz, M., Bleichrodt, A., and Quam, M. Housing diversity and SARS-CoV-2 transmission in a university setting.
In Quantitative Methodology Center 2022 Conference: Why Quantitative Research Matters (2022).
[14] Renninger, N., Nastasi, N., Bope, A., Cochran, S. J., Haines, S. R., Balasubrahmaniam, N., Stuart, K., Bivins, A., Bibby, K., Hull, N. M., and Dannemiller, K. C. Indoor dust as a matrix for surveillance of COVID-19. ASM Journals 6, 2 (Apr. 2021).
[15] Wascher, M., Klaus, C., Alvarado, C., Bope, A., Panescu, J., Quam, M., Dannemiller, K., and Joseph, T. A mechanistic modeling and estimation framework for environmental pathogen surveillance. In Society of Mathematical Biology Meeting, Mini-Symposium (2022).
[16] Wascher, M., Schnell, P. M., Khudabukhsh, W. R., Quam, M., Tien, J. H., and Rempała, G. A. Monitoring SARS-CoV-2 transmission and prevalence in population under repeated testing. medRxiv (2021).
[17] World Health Organization. Clinical management of COVID-19. https://www.who.int/teams/health-care-readiness/covid-19.
[18] Zhang, X., Wu, J., Smith, L. M., Li, X., Yancey, O., Franzblau, A., Dvonch, J. T., Xi, C., and Neitzel, R. L. Monitoring SARS-CoV-2 in air and on surfaces and estimating infection risk in buildings and buses on a university campus. Journal of Exposure Science and Environmental Epidemiology 32 (2022), 751–758.
mYUxvVthARd
Good work for data monitoring and collection
4: Good paper, accept
The paper presents a detailed picture of COVID-19 data tracking, monitoring, and collection. The work is practically meaningful and valuable to various communities for future studies: 1. The paper presents a clear and comprehensive process from collecting test samples to the final dashboard exhibition, which provides a valuable paradigm for data collection and processing, particularly for college and education communities. 2. The collected data are valuable to the public policy and AI modeling communities. For instance, the work uses WiFi data for individual monitoring and contact tracing, which may help establish contact networks and provide a better understanding of how disease can spread within schools. Despite the merits of the data collection process, we wish to understand more about the collected data. For instance, it would be good to include non-private or non-sensitive statistical analysis and visualization of the data in the presentation or the final paper.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
BNU_N-7EIR
KDD.org/2023/Workshop/epiDAMIK
2023
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective
["Namrata Banerji", "Steve Chang", "Andrew Perrault", "Tanya Berger-Wolf", "Mikkel Quam"]
The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University's data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection, investigate the roles and limitations of data analysis in supporting intervention choice and implementation strategies amid the complexities of the pandemic as it unfolded. Balancing privacy, consent, and transparency and ensuring the responsible handling of sensitive information is crucial in maintaining public trust. We examine privacy-preserving techniques, ethical frameworks, and legal regulations aimed at safeguarding individuals' rights while harnessing the power of data. In our experience, conscientious data architecture provided a foundation for meaningful ethical applications of data products, which not only helped mitigate the current crisis, but also can provide valuable insights for better addressing future public health emergencies.
["datasets", "public health", "data management", "ethics"]
Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective

Namrata Banerji ([email protected]), The Ohio State University, Columbus, Ohio, USA
Steve Chang ([email protected]), Ohio Supercomputer Center, Columbus, Ohio, USA
Andrew Perrault ([email protected]), The Ohio State University, Columbus, Ohio, USA
Tanya Y. Berger-Wolf ([email protected]), The Ohio State University, Columbus, Ohio, USA
Mikkel Quam ([email protected]), The Ohio State University, Columbus, Ohio, USA

Figure 1. Archived OSU Safe & Healthy COVID-19 Dashboard for November 2, 2020

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA. ©2023 Copyright held by the owner/author(s).

Abstract

The COVID-19 pandemic has disrupted the world. During this crisis, data has emerged as a critical resource for understanding, monitoring, and mitigating the impact of the disease. We present The Ohio State University’s data-driven framework for comprehensive monitoring of the COVID-19 pandemic. We discuss the challenges associated with data collection and investigate the roles and limitations of data analysis in supporting intervention choice and implementation strategies amid the complexities of the pandemic as it unfolded. Balancing privacy, consent, and transparency and ensuring the responsible handling of sensitive information is crucial in maintaining public trust. We examine privacy-preserving techniques, ethical frameworks, and legal regulations aimed at safeguarding individuals’ rights while harnessing the power of data.
In our experience, conscientious data architecture provided a foundation for meaningful ethical applications of data products, which not only helped mitigate the current crisis but can also provide valuable insights for better addressing future public health emergencies.

CCS Concepts: • Information systems → Database administration; • Applied computing → Health care information systems.

Keywords: datasets, public health, data management, ethics

ACM Reference Format: Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel Quam. 2023. Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 8 pages.

1 Introduction

The onset of the COVID-19 pandemic in early 2020 was one of the most significant and life-changing events for everyone on the planet, impacting everything from small businesses to entire countries. In the case of educational institutions, the indefinite suspension of classes, the upending of every traditional aspect of academic and student life, and the transition to virtual education were stressful for students, staff, and faculty alike. The Ohio State University (OSU), a large urban educational institution, undertook a massive policy response to support the continuing function of the university by monitoring and managing the dynamics of the pandemic on and around its campuses. Putting together a coalition of epidemiologists, data scientists, and public health policy makers was only the first step of what shaped up to be at least a three-year marathon. Data was at the center of the whole process, both as the decision enabler and as the product of many of the contributing efforts.
Making data actionable required the work of many teams and several iterations of cleaning, analysis and inference, and visualization. In this paper, we present the overall data-focused aspects of the process, highlighting the achievements and the hindrances, as well as the major takeaways, so that we are better prepared for future public health emergencies or other large-scale collective responses. This manuscript, besides serving as a piece of institutional memory, communicates in detail the various obstacles encountered in handling the mammoth data, for the data science community to be aware of. Among the main takeaways, we consider the effectiveness of data-driven approaches for managing the pandemic response, the need for an institutional data infrastructure, and the importance of a well-organized team of experts and professionals working together towards a well-defined goal.

2 Overview

The Ohio State University stood up the Comprehensive Monitoring Team (CMT) [4] to provide a framework of support for data-driven decisions for pandemic management, including robust case finding (via serial mass administration of individual PCR tests with rapid in-house processing), locally administered isolation of cases, contact tracing and quarantine of close contacts, as well as data integration, analysis, modelling, risk evaluation, policy recommendations, and intervention implementation based upon knowledge derived from individual case management, subsequent viral (genomic) sequencing, large-scale syndromic surveillance, and evidence of environmental (wastewater and dust) shedding [6, 12, 14, 15]. Here we present the core of the data component of this system, which integrated data from various testing centers, conducted daily analyses, and represented data in formats usable by the leadership to support both individual-level contact tracing and the university’s policy response to the public health emergency.
In the coming sections, we discuss the goals of setting up such a system, the implementation pipeline, the data sources, and some of the challenges and takeaways.

3 Goals

Building and maintaining such a huge framework, and employing a whole workforce including faculty, students, and healthcare workers, consumes university resources at a large scale. The goals were the result of several rapid iterations of convergent conversations between the university administration and members of the CMT, as well as consultations with external experts. The specific aims of the data components of the framework were as follows:

• Tracking the positivity rate. The positivity rate, or testing positivity rate, defined as the percentage of reported tests that are positive [10], emerged early in the pandemic as the agreed-upon indicator of the state of the population and the basis for comparing different populations [9]. We used the positivity rate throughout the monitoring process for a number of reasons, one being that this percentage (sometimes a fraction) was the most expressive and conveyed a more complete story than other measures, such as the absolute number of positive cases. It is true that 100% of the university population was not being tested, because there were exemptions (medical and otherwise) and non-compliant individuals, but we had the data necessary to determine exactly what fraction of the population was being tested. This was the best metric that we could monitor from the data and information available to us at the time, and it never became a cause for concern.

• Contact tracing. Removal of positive and potentially positive cases from the population is key for suppressing the spread of the virus [8, 17]. It was necessary to provide contact information for people who tested positive and to identify and contact their close contacts in order to isolate and quarantine them, respectively.

• Understanding the micro trends and risks based on events.
To understand the dynamics, the risks, and the implications of the pandemic for various subpopulations, it was necessary to provide the ability to zoom in on specific time intervals and subgroups in the data. Examples of the questions asked include: How does fall break or Halloween behaviour change/impact infection rates? Is there an increased risk for students in a 4-person suite over a 2-person dorm room? How do the risks associated with in-person classes compare with hybrid or remote classes?

Pandemic Data Collection, Management, Analysis and Decision Support: A Large Urban University Retrospective. epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA.

• Supporting daily policy decisions of a large urban university. Daily decisions supported by data included the choice of a testing strategy and protocol, transition to hybrid vs online-only classes, occupancy in classrooms, vaccination and masking requirements, etc. Having access to the right data was essential. The testing protocol [3, 16] was more strict in the early days of the pandemic, requiring all students who live in residence halls or who have at least one in-person class to test at least once every week. The requirements were relaxed in the subsequent semesters. Testing mandates were also in place around holidays; for example, students were required to test before a Thanksgiving break and after. The WiFi data was often utilized to get a sense of how many students were still residing in the dorms over the break, and how many went home.

• Reducing burden in the wider population. The OSU Columbus campus is a large urban campus with a highly permeable boundary in the center of a city. In order to contain the pandemic, the infection rates needed to be controlled both on and around campus. Moreover, the university sought to mitigate the export of infections to communities beyond its campuses.
College students mix with the city population and visit their family over academic breaks, potentially increasing the risk of transmission to vulnerable community members. Recommending and at times requiring testing before the academic breaks was one such measure taken to reduce the burden on the vulnerable, immuno-compromised population outside the university.

4 Implementation

OSU has 68,000 students, 12,000 of whom reside in residence halls during a regular year. During the pandemic, about 8,000 students were in residence halls and were required to test weekly. Additional students, faculty, and staff were testing voluntarily. At its peak, more than 30,000 tests per week were processed.

Multiple teams across Information Technology support, Student Life, the Translational Data Analytics Institute (TDAI), the Infectious Disease Institute (IDI), University Medical Centers, the College of Public Health, and many more were responsible for standing up a system that would be in place for at least the next 3 years. The data environment was a secure and flexible environment that allowed for dynamic data definition and integration of data from at least 56 sources when it was introduced. (The number of data sources grew to over 100 by the end of 2022.) Initial data sources included testing data together with the administrative data of student information, residence and permanent addresses, demographics, class registration, residence layout, class and college affiliations, WiFi access point information, and much more. The pipeline is illustrated in Figure 2 and is described very briefly below.

• Primary test data was transmitted into the internal secure data environment via electronic file transfer multiple times a day.

• Additional attributions from other internal OSU systems (Identity Management (IDM), Student Information Systems (SIS), Student Life, etc.) were preloaded and updated according to the system's change protocol (e.g.
each semester).

• Test results and internal data were combined into a cohesive reusable dataset (AKA the "gold table").

• Analysts and dashboard builders utilized a common source for all reports and visualizations.

• Data was also sent to HelpSpot/Salesforce to support case investigation and contact tracing efforts.

4.1 Data description and daily analysis

Among the 50+ tables and views that were maintained on AWS, there were 10-12 datasets, described below, that were most frequently accessed for daily analysis reports.

• 'Gold' dataset of people: This view is derived from multiple tables that contain individuals' unique identifiers, demographic information such as gender, race, ethnicity, age, home and campus address, affiliation with the university, affiliation with an OSU campus, indicators of whether they are on or off campus, student housing residence, etc. There are roughly 2.5 million entries in this dataset, with updates at regular time intervals of changing affiliations, addresses, and other variables.

• 'Gold' dataset of tests: Similar to the gold person table, this is also a derived view of data on tests administered by the university that combines variables like test provider name, test administered time, test result time, test result, type of test conducted, etc. It also contained some of the demographic information and addresses so that quick results could be obtained by running simple queries, without joining multiple tables.

• Dataset on off-campus residence housing: This dataset contains information on what organizations individuals are a member of, whether they are an active member, whether they live in the organization housing, etc. This was a particularly useful dataset at the beginning of the pandemic, as many outbreaks occurred in off-campus residence houses, which were analyzed for patterns [13].

• Dataset on contact tracing: Each actionable positive test result generated a ticket, which was entered into a Salesforce(TM) dataset of tickets.
The metadata associated with each ticket included a unique ticket identifier, the person whose close contact this is, the person who is the close contact, both their information, the time and result of the test, whether that person had symptoms, whether that person is an OSU affiliate, etc. This dataset was important throughout the pandemic, since these tests and contacts were the focus of most of the analyses.

Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel Quam

Figure 2. Data flow in the OSU COVID-19 monitoring pipeline.

Also, this dataset contained data on positive tests even if they were not present in the gold test data table. This is because while the gold table only recorded tests that were administered by the university, the Salesforce(TM) tickets dataset contained information on other tests, some outside the university, as long as they were positive. This dataset was thus a good source for the absolute number of positives in the university community, but not very good for computing rates, due to the absence of a denominator.

• Datasets on class enrollment: When the university reopened for the Fall after the summer of 2020, a lot of classes were online, some were hybrid, and few were in-person. It was important to know if there was additional risk of infection for students enrolled in classes conducted in person, and decisions had to be made to combat the risk and spread of infections. The class enrollment datasets were key in this effort.

• Datasets on vaccination: Two datasets were maintained that contained vaccination information, one for students and one for employees (including staff). Although containing the same information in essence, the two were structured differently. The tables for students contained two date variables, one denoting the date of dose received, and the other indicating the date when the individual becomes fully vaccinated according to CDC guidelines.
It also had variables corresponding to whether the individual had a vaccination exemption, whether the dose was CDC approved, the CDC code (e.g., 208 for Pfizer) [2], whether the shot was a booster, etc. On the other hand, the employee vaccination table contained columns for first vaccination date, second vaccination date, up to seventh vaccination date, and the provider information for each, in addition to the exemption and booster indications. Thus, the data analysis needed to produce the same results from the two tables needed to be different.

The initial daily analysis included a breakdown of test positivity rate in each of the residence halls, between demographics, majors, and campuses. This was for internal consumption, pattern identification, and insight derivation. Much of this data and the derived analysis was private and was not made public. The results that did make it to the dashboard [3], as shown in Figure 1, were the aggregate and summary numbers on the reproduction number, which is a standard epidemiological metric [7], the daily number of cases, the 7-day average, etc.¹ Identification of close contacts of students residing in dorms was a large part of the daily analysis, and the gold datasets were utilized to that end to produce a list of roommates and suitemates. A concise description of the analysis performed was first published in an initial report [4] in October 2020 and updated in a second report [5] in March 2021 by the CMT.

5 Challenges

The novelty, scale, and duration of the recent and ongoing pandemic were major challenges. Data collection, management, and analysis pipelines at this scale had no modern precedent and had to be designed as they were beginning to be used. Moreover, the timelines were drastically compressed and the requirements initially were changing frequently. In addition, some areas, such as close contacts or attendance of events, lacked data collection, and some critical data streams, including off-campus testing, were initially completely absent.
Further, like most teams around the world, we initially lacked a full understanding of how to translate the questions into data and how to prioritize the variables and the analysis for decision support, particularly in the context of human behavior. Below are some of the issues that posed significant challenges to the team.

¹The dashboard was awarded an A+ rating and selected as the best COVID-19 university dashboard by the "We Rate Covid Dashboards" panel of academics [1].

5.1 Data cleaning

The data was collected from numerous sources, some of which were manual entries, and consequently had unavoidable human error. For example, a table of people in the database had the OSU unique identification (name.#) as the primary key, and the table of test results was supposed to have the same as a foreign key. Typographical errors or null values in this identifier column resulted in our inability to correspond a test to an individual, causing a non-negligible shift in the summary statistics. Once the problem had been identified, there was a joint effort to clean it up, combining more than four data streams and reducing the number of unidentified tests to a number that would not change the inference. Yet, there were still a few individually unidentifiable entries in the datasets, albeit not a high enough number to raise concern. Minimizing manual entry to data sources can reduce such issues by a considerable amount.

A similar problem was found in the table for employee vaccination records, with clearly wrong dates of doses. While most were due to errors, in some cases employees were actually part of vaccination trials and had received a dose before any vaccination received emergency use authorization or approval for distribution to the general public.
These cases were indistinguishable from the erroneous cases without careful manual investigation and knowledge of the regulatory frameworks and timing of numerous vaccine candidates from all over the world.

One of the challenges that the team immediately encountered while using demographic data was that there were a number of similar datasets, curated by different organizations at OSU and used for different operational purposes. Repurposing these for COVID-19 demographics analysis required that specific datasets and methodologies be employed for consistency. A critical part of the human infrastructure here was the experts in the use of these legacy datasets, who could share what nuances may have been encoded in the data and help determine the least wrong datasets and methods to use. This investigation eventually led to the creation of the "gold" datasets, which were so named because they were the COVID project's gold-standard demographics associated with an individual or test.

These examples illustrate the need for expert data curation, close scrutiny of analysis outputs that consumed these data sources, efforts to minimize manual data entry, and close collaboration with domain experts at every step.

5.2 Data storage, backup, documentation, and recovery

The volume of data generated by testing mandates as well as voluntary testing required careful consideration of large, yet quickly accessible and continuously backed-up data storage. The ability to look up prior data was critical to understanding trends and the dynamics of trends, as well as comparing the outcomes of various past decisions. For continuously changing data, such as the daily updated test data, it is necessary to maintain regular snapshots, checkpoints, and versions. This aspect was not fully appreciated initially and required significant effort to redesign the data architecture. We maintained two 'gold' datasets, one corresponding to people and demographics and one corresponding to tests' metadata.
These derived datasets were cleaned and organized to our standards to serve as the basis of further analysis. This cut down on the work of individual analysts, so that those cleaning/organization steps would not need to be repeated. The 'gold' data of people, consisting of faculty, staff, students, and everyone else affiliated in some way with the university, updates significantly every semester, overwriting previous data in the database (S3 environment). We would save a snapshot of the data every semester, but unfortunately the snapshots were initially taken towards the end of the semesters, when students had already started leaving the campus. As a result, when we recently wanted to get a time series of positivity rates in residence halls, it differed from the original, since we do not have the correct denominator. Recovering this information is possible, but requires integration of other data sources, demanding significant investment of resources, effort, and time. The majority of the people who were part of the university supporting the CMT and were responsible for setting up the system are no longer working at OSU. Moreover, early in the reopening of the university, the primary focus was on managing the pandemic and bringing down the positivity rate, and detailed documentation was not prioritized.

Mid-semester migration from one homegrown case data management solution to an outside vendor was a major issue that required major investment and retraining, and we are continuing to deal with this today from a data and analysis perspective.
Roughly from August 2020 to November 2020, we had our positive test (case) data ingested and case investigation/contact tracing notes stored in a secured instance of a HelpSpot database, integrating in some instances with REDCap surveys and pushing out to several communication platforms; later we shifted to a Salesforce Health Cloud build, which assisted with future testing data variations, vaccine information, as well as some automatic reminder communications. The data had been migrated from the old table to the new one in theory, but in part user-generated heterogeneity, as well as version control issues in the HelpSpot source data, meant there continued to be gaps in the data ingested by Health Cloud (Salesforce) which do not have simple workarounds for analysis of all variables. We maintain several tables for the test information storage, but there are inconsistencies across those tables. More than one table exists mainly because we derived simpler versions of tables with many columns that are not relevant for day-to-day analysis. One of the (intermediate) mother tables recently had one of its very important columns (the test specimen collection time/date column) dropped from an integration during an update, and it would have been fine to just look it up in a derived or other related testing table had there not been major differences in the number of entries in the others.

The IT organization at OSU, then known as the Office of the CIO (OCIO), had embarked on a project prior to the COVID epidemic to move OSU enterprise data off premises and onto Amazon Web Services (AWS). AWS was the obvious choice as the data storage platform, as much of the data were already present on the platform, and tools such as Amazon Athena were able to provide a layer of data abstraction so that disparate datasets could be queried in a consistent manner.
That OCIO project to house these data in a consistent manner was fortunate; it would otherwise have added an additional layer of processing to export and synthesize data from various legacy systems. The other major consideration is that there are significant costs to using a commercial cloud service. While these were covered in part by the OCIO project, additional data storage for COVID data and the use of AWS tools such as Athena were incurred by the COVID project.

5.3 Data governance and ethical considerations

The university has a complex set of data governance regulations, as do individuals' private health information, whether used in healthcare or public health applications. While special authorization was granted to use some of the data in the pandemic emergency, security and privacy remained strict requirements. Each team member had training in handling secure and private data.

In addition to the standard data governance issues, dealing with high-resolution personal data has its own set of ethical issues. Ultimately, the main question was: what is the benefit of using a particular data source or performing a particular analysis, and would it change the decisions or the pandemic dynamics? If so, was it necessary to use individual and identifiable data for decision making, or could aggregate or coded information have similar utility? For example, while it is within the rights of the university to use the WiFi access point information to "follow" an individual or to understand who is within the same room, such information has a high 'icky factor' and should be used sparingly. Moreover, while initially it seemed that WiFi data would provide a good proxy for contact tracing, it turned out that the resolution of the data did not correspond well to the physical definitions of a contact. Ultimately, it was decided to use WiFi data in aggregate to assess population movements rather than individuals' proximity to other individuals.
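A toy sketch of this aggregation idea (hypothetical events, devices, and building names; not the actual system): reduce raw association events to distinct-device counts per building and hour, so that no per-person trajectory is retained.

```python
from collections import Counter

# Hypothetical WiFi association events: (device_id, building, hour of day).
events = [
    ("dev1", "Thompson Library", 9),
    ("dev1", "Thompson Library", 10),
    ("dev1", "Thompson Library", 10),  # duplicate association, same hour
    ("dev2", "Thompson Library", 9),
    ("dev3", "Dreese Labs", 9),
]

def building_occupancy(events):
    """Count distinct devices per (building, hour); drops individual trajectories."""
    deduped = {(dev, bld, hr) for dev, bld, hr in events}
    return Counter((bld, hr) for _, bld, hr in deduped)

occ = building_occupancy(events)
print(occ[("Thompson Library", 9)])  # prints 2 (two distinct devices at 9:00)
```

The aggregate output answers population-level questions (how full was a building at a given hour) while discarding exactly the individual-level linkage that raised the ethical concerns above.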
For example, WiFi data was used to estimate the number of students leaving campus over the weekend or the number of students present in an "in person" classroom. Moreover, the aggregate trends proved to be much more robust than the individual-based analysis and were significantly less time consuming. Additionally, adherence to the current applicable statutory guidelines for case investigation, subsequent case management, and/or contact tracing may require some variation depending upon individuals' occupation, travel history, personal risk factors, immunocompetence, and vaccination status, which could include certain specific preexisting conditions, medications, clinical care received, viral (variant/sub-variant) lineage, and/or disease severity. However, specific individuals' health information related to their experience with COVID-19 would largely not meaningfully determine macro-level prevention policy or interventions in the university context independently from aggregate trends and information in the wider public health policy guidance, which are separately informed by individuals' public health, laboratory testing, and clinical health records. Therefore, particularly those sensitive individual-level data, especially health data, were collected and subsequently shared only to the extent they would have 'meaningful use', within the data user groups' spheres of control, stated goals, and purview (i.e.
healthcare providers would have access to information relevant for managing patient care; public health authorities would have access to information relevant to determining specific application of disease management protocols for individuals and/or groups; occupational health, workplace, and student life safety personnel would have limited access to information relevant to adherence with applicable disease prevention laws and policies aimed at risk reduction, such as adherence to testing, vaccination, and isolation/quarantine requirements in some instances).

6 Takeaways

6.1 Behavior over analytics

The main takeaway of our data-supported pandemic monitoring framework is the same as the main takeaway for dealing with the COVID-19 pandemic world-wide: ultimately, the success of the system hinges on modifiable human behavior, rather than the sophistication of the analysis. No improvement in the accuracy of the analysis of the effect of masking in a given setting (i.e., library, classroom, laboratory, or healthcare setting) is meaningful if people will not (continue to) comply with an indoor mask mandate. Similar limitations became apparent with both pharmaceutical and non-pharmaceutical interventions: even as evidence increasingly substantiated benefits and new sub-variants emerged, populations' apparent risk tolerance grew and spread.

6.2 Communication is key

When working with a team this large, with people from vastly diverse backgrounds, communication between the teams becomes an essential component. A major part of the analysis was carried out by graduate student employees, who were sometimes not aware of things like floor structure in dorms, testing protocols, vaccination mandates, etc., which were important analysis components.
Similarly, the modelling team was involved in building risk models, models for testing strategy development, etc., that relied on domain knowledge outside of mathematics or computer science. Clearly, experts in every relevant domain (epidemiology, public health, student residence life, university logistics and operations, etc.) need to be constant partners in the analysis.

6.3 Equity considerations and singling out demographic groups

When patterns appear to be emerging within a specific group or sub-demographic, there may be an equity-oriented opportunity for targeting or strengthening an intervention, but there may also be a bias in the observed signal. One group may in fact be more often in situations involving exposure to infectious persons, or engaged in riskier behavior than others, as we occasionally discovered from data analysis. However, the available policy-level changes may not have been feasible solutions and were not always ultimately enacted. What we started to see in the data raised questions on the ethics and trustworthiness of data-enabled interventions, without context or corroboration. Some solutions aimed at addressing one group's perceived or real deficiency in access to resources, or excessive exposure, could foster stigma or loss of other resources in unanticipated ways. After careful consideration, it was agreed that singling out a group was often not enough of a value addition, or could do more harm than good.
In some cases, trends observed initially in one population or group were indicative of larger trends that could be addressed by policy shifts relevant to the whole community, which would both address the observed inequity and mitigate known unintended consequences.

6.4 Micropatterns significant, but not usable in hindsight

Reflection on the decisions made over the course of three years showed that the micropatterns and microtrends observed in the data had little to no effect on those decisions. Observations that a certain subgroup engaged in activities that increased the risk of the spread of the infection did not prompt the authorities to take measures to shut down those activities, in many cases because it was either not cost-effective or unethical to do so. These data nuances did provide information, but it was not actionable. In retrospect, however, the information's main utility was in the fact that no single critical subgroup was the key to the solution. The scale of the phenomena did not lend itself to a single pathway of solution or a single target group. Patterns that we learned in settings like an early long-term care facility were also observed later in dorms, sorority and fraternity houses, and athletics teams, and they led to better population-level responses. A good example would be the limitations of certain kinds of tests for transmission suppression. The Big10 testing program involved daily testing of athletes during their competition season, given that team members were often unable to mask and physically distance in some sports. Unfortunately, when transmission started to increase rapidly in late autumn 2020 as sports teams re-started their compressed seasons, even daily testing with rapid results was insufficient to suppress transmission, largely because the particular test used did not detect all infectious individuals immediately.
By the time one tests positive on an antigen test, like those in use at that time, a person may have already been infected and infectious for a few days, potentially exposing others and continuing transmission chains. Antigen tests are useful for rapid diagnosis, particularly when symptomatic, but are not always ideally suited for early enough detection to reduce spread in a serial testing model. OSU opted for developing and deploying swift, minimally invasive (saliva-based), highly specific, highly sensitive PCR testing, shown to be able to detect pre-symptomatic and asymptomatic infections (eventually even processing results with its own PCR testing and sequencing lab capable of thousands of tests per day). Although these were not as fast as antigen tests, the average turnaround time was less than 24 hours during much of the semesters' most populated periods. This was a scenario where tracking a micropattern in a particular well-observed and well-resourced group gave us really good information about what and how we should be optimizing testing resources, and working within their limitations with the larger university community's population.

6.5 Data infrastructure

The overall data infrastructure consists of cyberinfrastructure (compute, storage, networking, cloud and web services), information infrastructure (data and metadata management, search, archiving, cataloging, and digital services), and analytics infrastructure (data integration, harmonization, and analysis). The large volume of data collected, the collection rate, the distributed team setting, potential errors, inconsistencies and variations in reporting standards, and changing objectives all strained and challenged the existing data infrastructure at OSU and necessitated its expansion. Moreover, COVID-19 management provided a great case study of, and emphasis on, the fact that data infrastructure integrates cyber-, information, and data services infrastructures through human infrastructure.
Building the human infrastructure is both the most critical aspect of any data infrastructure and the hardest to implement. We have seen personnel migrate out of the team and the university, and when that happens, they take institutional knowledge with them. Replacing personnel in such a fast-paced environment entails a lot of rigorous training that newer team members have to go through within a very short period of time. Even after being on board, it takes significant time to bring them up to speed, which often creates a bottleneck.

6.6 Scale

The sheer volume of COVID-19 data generated from testing and vaccination overwhelmed the existing data management systems of the university as well as the state. Scaling up data infrastructure and analytical capabilities to handle large-scale data collection and analysis proved to be a significant challenge, but one that can definitely be overcome.

7 Comparison between similar systems in place nationwide

The COVID-19 pandemic was monitored worldwide, and any attempt to track rates or contain the outbreaks had to involve systems governing huge amounts of data. Among the enormous number of research papers utilizing the pandemic data, very few discuss the nuances of the data collection and storage mechanisms deployed. For example, a paper [18] from the University of Michigan describes collecting environmental surveillance data in order to estimate infection risk. This direction of research and analysis was popular in many organizations and was a good means of estimating the risk of infection within a campus from sources like dust and sewage water, including at OSU [6, 14, 15]. Another paper [11] discusses digital health research and tracking in general, but in the light of the pandemic and how it impacted practices.
Their concerns are very similar to ours, but unlike their generic view, we provide a complete account of a real experience, with a series of issues faced and tackled at an urban institution.

8 Conclusion

We hope that the COVID-19 pandemic was a one-off unique event, never to be repeated. Yet, we should be prepared to respond to a similar event by learning from our experience. We hope that the OSU CMT work presented here can serve not only as a blueprint, but as a guide for considerations, priorities, and potential pitfalls, should a response at this scale ever be needed again.

Acknowledgments

We would like to acknowledge the work of the many people who have contributed to the effort of enabling the data-driven approach to monitoring and managing the COVID-19 pandemic at the Ohio State University: the entire Comprehensive Monitoring Team (CMT), the Case Investigation and Contact Tracing Team, CMT student analysts, the CMT/IDI Modeling Team, the Applied Microbiology Services Lab, the Testing Operations Team, the Student Life Isolation and Quarantine Team, Student Health Services, Employee Health Services, local and state public health authorities, the dashboard developers, and the OTDI team, including D&A data engineers, the data governance team, network administrators, and enterprise security.

References

[1] A deeper dive into Ohio State's top-rated COVID-19 testing data dashboard. https://news.osu.edu/a-deeper-dive-into-ohio-states-top-rated-covid-19-testing-data-dashboard. Accessed July 31, 2023.

[2] IIS: HL7 Standard Code Set Mapping CVX to Vaccine Groups. https://www2.cdc.gov/vaccines/iis/iisstandards/vaccines.asp.

[3] Safe and Healthy Buckeyes COVID-19 Dashboard (archived). https://safeandhealthy.osu.edu/dashboard. Accessed July 31, 2023.

[4] Safe Campus Scientific Advisory Subgroup Recommendations. https://safeandhealthy.osu.edu/sites/default/files/2020/07/safe-campus_6.30.pdf. Accessed July 31, 2023.

[5] The Ohio State University Comprehensive Monitoring Team — Report 2. March 2, 2021.
https://safeandhealthy.osu.edu/sites/default/files/2021/03/the_ohio_state_university_comprehensive_monitoring_team_-_report_2.pdf. Accessed July 31, 2023.

[6] Tracking COVID-19 with dust at the Ohio State University. https://sapac.illumina.com/company/news-center/feature-articles/tracking-covid-19-with-dust-at-the-ohio-state-university.html. Accessed July 31, 2023.

[7] Achaiah, N. C., Subbarajasetty, S. B., and Shetty, R. M. R0 and Re of COVID-19: Can we predict when the pandemic outbreak will be contained? Indian Journal of Critical Care Medicine 24, 11 (Nov. 2020), 1125–1127.

[8] Centers for Disease Control and Prevention. COVID-19 Overview and Infection Prevention and Control Priorities in non-U.S. Healthcare Settings. https://www.cdc.gov/coronavirus/2019-ncov/hcp/non-us-settings/overview/index.html.

[9] Dallal, A. A., Dallal, U. A., and Dallal, J. A. Positivity rate: an indicator for the spread of COVID-19. Current Medical Research and Opinion 37, 12 (2021), 2067–2076.

[10] Doraiswamy, S., Mamtani, R., and Cheema, S. An in-depth analysis of 10 epidemiological terminologies used in the context of COVID-19. SAGE Choice 50, 6 (Dec. 2021), 819–826.

[11] Dron, L., Kalatharan, V., Gupta, A., Haggstrom, J., Zariffa, N., Morris, A. D., Arora, P., and Park, J. Data capture and sharing in the COVID-19 pandemic: a cause for concern. The Lancet Digital Health 4, 10 (Oct. 2022), E748–E756.

[12] Dusen, J. V., LeBlanc, H., Renninger, N., Nastas, N., Panescu, J., Smith, J. W., Sovic, M. G., Williams, A., Quam, M., Faith, S., and Dannemiller, K. Identification of SARS-CoV-2 variants in indoor dust. In Association of Environmental Engineering and Science Professors Research and Education Conference 2023 (2022).

[13] Krantz, M., Bleichrodt, A., and Quam, M. Housing diversity and SARS-CoV-2 transmission in a university setting.
In QuantitativeMethodology Center 2022 Conference: Why Quantitative Research Mat-ters(2022).[14] Renninger, N., Nastasi, N., Bope, A., Cochran, S. J., Haines, S. R.,Balasubrahmaniam, N., Stuart, K., Bivins, A., Bibby, K., Hull, N. M.,and Dannemiller, K. C. Indoor Dust as a Matrix for Surveillance ofCOVID-19. ASM Journals 6 , 2 (Apr. 2021).[15] Wascher, M., Klaus, C., Alvarado, C., Bope, A., Panescu, J., Quam,M., Dannemiller, K., and Joseph, T. A mechanistic modeling andestimation framework for environmental pathogen surveillance. InSociety of Mathematical Biology Meeting, Mini-Symposium (2022).[16] Wascher, M., Schnell, P. M., Khudabukhsh, W. R., Quam, M., Tien,J. H., and Rempała, G. A. Monitoring SARS-COV-2 transmission andprevalence in population under repeated testing. medRxiv (2021).[17] World Health Organization . Clinical management of COVID-19.https://www.who.int/teams/health-care-readiness/covid-19 .[18] Zhang, X., Wu, J., Smith, L. M., Li, X., Yancey, O., Franzblau, A.,Dvonch, J. T., Xi, C., and Neitzel, R. L. Monitoring SARS-CoV-2 inair and on surfaces and estimating infection risk in buildings and buseson a university campus. Journal of Exposure Science and EnvironmentalEpidemiology 32 (2022), 751–758.
cT3TaN8D9er
Good experience sharing but not a well-formed research paper
1: Ok but not good enough - rejection
Summary Of The Review: This paper discusses the role of data in managing the COVID-19 pandemic, focusing on its collection, management, analysis, and application in decision-making. The authors present The Ohio State University's data-driven framework for monitoring the pandemic, including the data sources, methods, and technologies used for case finding, contact tracing, and visualization. They discuss challenges such as privacy concerns, data quality, and the need for harmonization across different sources. The paper also explores ethical considerations in data usage during the pandemic. The authors highlight the importance of data architecture, teamwork, and ethical frameworks in addressing public health emergencies. The paper concludes with key takeaways and lessons learned for future public health emergencies.

Pros:

1. The authors possess extensive knowledge and experience in managing COVID-19 at The Ohio State University, providing valuable insights and practical examples that can benefit other systems.

2. The paper offers a comprehensive discussion of the university's policies.

Cons:

1. Lack of references and comparisons

The paper lacks citations to substantiate and compare the findings and approaches presented, which is a notable deficiency. For instance, it is essential to clarify the definition of the "positivity rate" in this paper and provide a rationale for its use, which would benefit from external support. Additionally, the paper displays "R(t) Numbers for Ohio" in Figure 1 but fails to mention or discuss this important metric in the text, warranting a comprehensive review of the relevant literature to enhance its explanation. Similarly, the use of terms like "gold table" and "'gold' data of people" would be less perplexing if supported by appropriate references. Furthermore, considering the abundance of existing pandemic surveillance systems, it would be advantageous to examine their operational mechanisms. This examination would enable the identification and comparison of the strengths and weaknesses of the system presented in the paper.

2. Ambiguous statements

Several statements in the paper lack clarity and precision, leading to confusion among readers. Additionally, the paper fails to provide a comprehensive summary of the data entries presented in the tables, leaving readers unsure about the specific information contained within them. Moreover, the paper lacks explicit descriptions of tasks, analyses, or well-defined evaluations, despite drawing conclusions and using phrases such as "No improvement in the accuracy of the analysis of the effect of masking in a given setting" and "After careful consideration, it was agreed that singling out a group was often not enough of a value addition or could do more harm than good." Here are additional instances of imprecise terminology lacking explicit definitions or thorough evaluation of their scope.

Section 5.1: "Typographical errors or null values in this identifier column resulted in a non negligible shift in **the summary statistics**, given the **enormous number** of tests conducted. Once the problem had been identified, there was **joint effort to clean it up**, combining **more than four** data streams and reducing the number of unidentified tests to **a number** that would not change **the inference**. Yet, there were still **a few** individually unidentifiable entries in the datasets, albeit **not enough a number** to raise a concern. Minimizing manual entry to data sources can reduce such issues by **a considerable amount**."

Section 5.2: "The data had been migrated from the old table to the new one in theory, but in part **user generated heterogeneity**, as well as version control issues in the HelpSpot source data meant there continued to be **gaps** in the data ingested by Health Cloud (Salesforce) which do not have simple workarounds for analysis of all variables. We maintain **several tables** for the test information storage, but there are **inconsistencies** across those tables. More than one tables exist mainly because we derived simpler versions of tables with many columns that are not relevant for **day-to-day analysis**."

Section 7.2: "**One group** may in fact be more often in situations involving exposure to infectious persons, or engaged in more risky behavior than others, as we **occasionally** discovered from data analysis. However, available policy level changes **may not have been feasible solutions** and were not always ultimately enacted."

3. Unaddressed privacy concerns

The authors argued that they would "examine privacy-preserving techniques" and that "security and privacy remained strict requirements". Yet in Section 7.2 the authors also state that "while it is within the rights of the university to use the WiFi access point information to 'follow' an individual or to understand who is within the same room, such information has a high 'icky factor' and should be used sparingly." Despite this, "it was decided to use WiFi data in aggregate to assess population movements rather than individuals' proximity to other individuals". Furthermore, the data is "shared"; "health data were collected and subsequently shared only to the extent they would have 'meaningful use'". It would be useful to clarify with whom the data was shared, what was shared, and what training team members had, and to describe in more detail the type of data that is collected and disseminated from tracking students' WiFi locations, seemingly without their knowledge or permission.

4. Other minor problems

Figure 1 in the paper is presented without any accompanying explanation, leading to confusion among readers. The lack of clarification makes it difficult to comprehend the purpose and significance of the "Personal Protective Equipment" and "Enhanced Cleaning" sections, both of which are represented by equal green circles in the figure. Also, "Behavior over analytics" should be Section 6.1 rather than Section 7.

In general, the paper offers valuable experience in data management during the COVID-19 pandemic. However, there are several areas that require improvement. First, the paper should include more references and comparisons to support its findings and approaches. Additionally, the analysis section would benefit from a more detailed explanation of the methodologies employed. The clarity and logical presentation of results and takeaways also need to be enhanced. Furthermore, the paper should address the privacy concerns associated with the data management practices discussed.
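[Editor's note] The reviewer's first point turns on the fact that "positivity rate" admits more than one convention, and the paper states none. A minimal sketch of why the convention matters, with made-up illustrative numbers (this is not the paper's definition, which is precisely what the reviewer asks the authors to supply):

```python
def positivity_rate(positive, total):
    """Fraction of tests (or of tested individuals) that are positive."""
    if total <= 0:
        raise ValueError("total must be positive")
    return positive / total

# Test-level convention: every administered test counts in the denominator.
weekly_tests, weekly_positives = 8000, 120
print(f"test-level:   {positivity_rate(weekly_positives, weekly_tests):.1%}")

# Person-level convention: repeat tests of the same individual collapse to
# one record, so the same raw data can yield a noticeably different rate.
unique_people, positive_people = 5000, 110
print(f"person-level: {positivity_rate(positive_people, unique_people):.1%}")
```

With these hypothetical figures the two conventions give 1.5% versus 2.2% from the same underlying testing program, which is why the review treats the missing definition as a substantive gap rather than a stylistic one.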
3: The reviewer is fairly confident that the evaluation is correct